Files larger than 4 GB
The program shows the warning: "One or more files exceed the filesystem size limit. These files cannot be saved properly. Do you want to continue anyway?"
How to fix it
Convert the drive from FAT32 to NTFS:
1. Click Start > Run
2. Type cmd and press Enter
3. Then type CONVERT C: /FS:NTFS
Here C: is the drive to convert. If your files are stored on another drive, use that drive letter instead; for example, for drive D:
CONVERT D: /FS:NTFS
Creating a Linux boot diskette
1. Making a boot disk from MS-DOS: open the Command Prompt under Windows:
Start ==> Programs ==> Command Prompt
C:\> d:
D:\> cd \dosutils
D:\dosutils> rawrite
Enter disk image source file name: ..\images\boot.img
Enter target diskette drive: a:
Please insert a formatted diskette into drive A: and press --ENTER-- :
D:\dosutils>
2. Making a boot disk from a Linux-like OS: insert a formatted diskette in drive A: and Red Hat CD 1 in the CD-ROM drive.
[root@one root]# mount /dev/cdrom /mnt/cdrom
[root@one root]# cd /mnt/cdrom/images/
[root@one images]# dd if=boot.img of=/dev/fd0H1440 bs=1440k
1+0 records in
1+0 records out
[root@one images]# cd /
[root@one root]# umount /mnt/cdrom
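The same dd invocation can be rehearsed safely with ordinary files instead of the floppy device (the file names below are stand-ins for boot.img and /dev/fd0H1440, invented for illustration):

```shell
# create a dummy 1.44 MB image as a stand-in for boot.img
dd if=/dev/zero of=boot.img bs=1440k count=1 2>/dev/null
# copy it block for block, exactly as the example does onto /dev/fd0H1440
dd if=boot.img of=floppy.img bs=1440k 2>/dev/null
# both files are 1474560 bytes (1440 KiB)
wc -c < floppy.img
```

On the real diskette the only change is the of= target; dd itself does not care whether it writes to a file or a device node.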
The tar command
The tar command is used to back up files to a tape backup device, but it is also used to pack files or directories into a single .tar file. When the -z option is given, the result is additionally compressed with gzip, giving the extension .tar.gz.
Usage: tar options archive_file files_or_directories_to_archive
tar options:
c  create an archive file
x  extract data from an archive file
v  verbose: show details while tar runs
z  compress with gzip
-f file  set the name of the archive file, which can be a regular file or a device file
Example: compressing files with tar and gzip
[root@training1 backup]# tar cvfz postgres_data.tar.gz /usr/local/pgsql/data
/usr/local/pgsql/data/
/usr/local/pgsql/data/pg_ident.conf
/usr/local/pgsql/data/postgresql.conf
/usr/local/pgsql/data/pg_xlog/
...
[root@training1 backup]# ls
postgres_data.tar.gz
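A backup made this way can be restored with the x option. A minimal round trip, using throwaway paths rather than the real PostgreSQL data directory:

```shell
mkdir -p data && echo "hello" > data/pg_ident.conf
tar czvf backup.tar.gz data     # c: create, z: gzip, v: verbose, f: archive name
rm -r data                      # simulate losing the directory
tar xzvf backup.tar.gz          # x: extract; recreates data/ and its contents
cat data/pg_ident.conf          # prints "hello"
```

Note that the same option letters described above are used for both directions; only c is swapped for x.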
Linux Init Run Levels
The idea behind operating different services at different run levels essentially revolves around the fact that different systems can be used in different ways. Some services cannot be used until the system is in a particular state, or mode, such as being ready for more than one user or having networking available. There are times in which you may want to operate the system in a lower mode. Examples are fixing disk corruption problems in run level 1 so no other users can possibly be on the system, or leaving a server in run level 3 without an X session running. In these cases, running services that depend upon a higher system mode to function does not make sense because they will not work correctly anyway. By already having each service assigned to start when its particular run level is reached, you ensure an orderly start up process, and you can quickly change the mode of the machine without worrying about which services to manually start or stop. Available run levels are generally described in /etc/inittab, which is partially shown below:
# inittab This file describes how the INIT process should set up
# the system in a certain run-level.
# Default runlevel. The runlevels are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS
#       (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)
#
id:5:initdefault:
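The default runlevel is the second colon-separated field of the initdefault line. A sketch of reading it with awk, using a sample file so the example does not depend on a real /etc/inittab being present:

```shell
# sample line copied from the excerpt above
printf 'id:5:initdefault:\n' > inittab.sample
# print the runlevel field (on a real system, point awk at /etc/inittab)
awk -F: '/initdefault/ { print $2 }' inittab.sample
```

The running runlevel can also be queried with the runlevel command, and changed with telinit N (both require an init-based system).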
Mounting filesystems
Mount points
A mount point is the directory where the files on the device being mounted are made to appear. The traditional location for mount points is /mnt, e.g. /mnt/floppy, /mnt/cdrom; newer Linux distributions prefer to place mount points under /media, e.g. /media/cdrom, /media/hda1.
Device names to mount
/dev/cdrom  the CD/DVD drive
/dev/fd0    the floppy disk drive
/dev/hda1   partition 1 of the primary master IDE hard disk
/dev/sda1   partition 1 of a SCSI hard disk
/dev/sda    USB devices such as thumb drives; if the hard disk is already /dev/sda, such devices appear as /dev/sdb. Whether a partition number is needed depends on the device, so you may have to try both: some mount as /dev/sda, others as /dev/sda1.
Using the mount command
mount [options] device dir, or in short form, mount dir
Examples:
mount -t vfat /dev/hda1 /mnt/hda1
mount /dev/fd0 /mnt/fd0
mount /dev/cdrom /mnt/cdrom
mount /dev/sda /mnt/thumb
mount /dev/cdrom
mount /mnt/cdrom
The short form, mount dir (for example, mount /mnt/cdrom), can only be used when the device or partition is already described in the file /etc/fstab.
Before mounting, directories such as /mnt/fd0, /mnt/cdrom and /mnt/thumb are empty. Once the mount succeeds, those directories show the files stored on the mounted device.
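An /etc/fstab entry that would enable the short form mount /mnt/cdrom might look like the line below (the device name, filesystem type and options are assumptions; check your own system's file):

```
/dev/cdrom   /mnt/cdrom   iso9660   noauto,ro,user   0 0
```

The six fields are: device, mount point, filesystem type, mount options, dump flag and fsck order.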
Unmounting
Once a device has been mounted, it must eventually be unmounted, using the umount command. For example, a CD cannot be ejected from the drive until it is unmounted, and unplugging a USB device without unmounting it can damage the data on it.
Using umount
umount [options] dir_or_device
Examples of unmounting with umount:
umount /mnt/fd0
umount /mnt/cdrom
umount /mnt/thumb
Commands covered in this section:
mount
umount
The RPM command
Most Linux software is written in C. Installing from source means compiling it with the three classic commands ./configure, make, make install, which is difficult and inconvenient for ordinary users. Each Linux distribution therefore tries to make software installation easier, with differing methods and technologies: for example, Debian and Ubuntu use apt-get, while Red Hat uses rpm (RPM Package Manager).
Format of RPM file names
name          package name
version       version
release       release number
architecture  i386, i586, athlon: Intel x86 compatible; alpha: Digital Alpha/AXP;
              ia64: IA-64 (Itanium); s390: S/390
noarch        architecture-independent code
Examples:
postgresql-7.3.2-3.i386.rpm
  package name: postgresql
  version: 7.3.2
  release: 3
  architecture: i386
setup-2.5.25-1.noarch.rpm
  noarch means the package does not depend on the CPU architecture
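The name-version-release.architecture.rpm pattern can be taken apart with ordinary shell parameter expansion; a small sketch using the file name from the example above:

```shell
pkg="postgresql-7.3.2-3.i386.rpm"
base="${pkg%.rpm}"        # strip the .rpm suffix -> postgresql-7.3.2-3.i386
arch="${base##*.}"        # last dot field       -> i386
rest="${base%.*}"         #                      -> postgresql-7.3.2-3
release="${rest##*-}"     # last dash field      -> 3
rest="${rest%-*}"         #                      -> postgresql-7.3.2
version="${rest##*-}"     #                      -> 7.3.2
name="${rest%-*}"         #                      -> postgresql
echo "$name $version $release $arch"
```

This relies on version and release themselves containing no dashes, which holds for the RPM naming convention shown above.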
Installing and removing packages
Install: rpm -i   install
Upgrade: rpm -U   upgrade
Freshen: rpm -F   upgrade only if already installed; otherwise do nothing
Erase:   rpm -e   remove
Output options: -v, -h   print # marks (hash progress) while working
rpm queries
Format:
rpm -q what_package what_information
• Package options:
• -a
• package_name
• -f filename
• -p package_file_name
• Information Options:
• Default: package name
• -i: general information
• -l: file list
Examples of rpm queries:
rpm -qa                               list all installed packages
rpm -qi postgresql                    show information about a package
rpm -ql postgresql                    list the files in the postgresql package
rpm -qf /usr/bin/psql                 which package does this file belong to?
rpm -qlp postgresql-7.3.2-3.i386.rpm  list the files this package file would install and where they go
rpm -qip zip-2.3-16.i386.rpm          show information about this package file
Setting the Default Route
It should come as no surprise to a close reader (hint) that the default route was removed by the execution of ifconfig eth0 down. The crucial final step is configuring the default route.
Example 1-8. Adding a default route with route
[root@morgan]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.99.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
[root@morgan]# route add default gw 192.168.99.254
[root@morgan]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.99.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
0.0.0.0 192.168.99.254 0.0.0.0 UG 0 0 0 eth0
The routing table on morgan should look exactly like the initial routing table on tristan. These changes to the routing table on morgan will stay in effect until they are manually changed, the network is restarted, or the machine reboots. With knowledge of the addressing scheme of a network, and the use of ifconfig and route, it's simple to readdress a machine on just about any Ethernet you can attach to. The benefits of familiarity with these commands extend to non-Ethernet IP networks as well, because these commands operate on the IP layer, independent of the link layer.
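In the table above, the default route is the row whose destination is 0.0.0.0, and its gateway is the second field. That field can be pulled out with awk; a sketch against a saved copy of that row (on a live system you would pipe route -n itself):

```shell
# one row of 'route -n' output, copied from the example above
line='0.0.0.0         192.168.99.254  0.0.0.0         UG    0      0        0 eth0'
# field 2 of the default (0.0.0.0) row is the gateway address
echo "$line" | awk '$1 == "0.0.0.0" { print $2 }'
```

On current distributions the same change is made with the iproute2 tools, e.g. ip route add default via 192.168.99.254, which replaced the older route command.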
Network setup commands
arp This program lets the user read or modify the ARP cache.
dig(1) Send domain name query packets to name servers for debugging or testing.
finger Display information about the system users.
ftp File transfer program.
ifconfig Configure a network interface.
ifdown Shutdown a network interface.
ifup Brings a network interface up. Ex: ifup eth0
ipchains IP firewall administration used to set input, forward, and output rules.
netconf A GUI interactive program to let you configure a network on Redhat systems.
netconfig Another GUI step by step network configuration program.
netstat Displays information about the system's network connections, including port
connections, routing tables, and more. The command "netstat -r" will display the
routing table.
nslookup Used to query DNS servers for information about hosts.
pftp Same as ftp.
ping Send ICMP ECHO_REQUEST packets to network hosts.
portmap DARPA port to RPC program number mapper. Must be running to make RPC calls.
rarp Manipulate the system's RARP table.
rcp Remote file copy. Copies files between two machines.
mkdir, touch Creating Directories and Files
mkdir (MaKe DIRectory) is used to create directories. Its syntax is simple:
mkdir [options] [directory ...]
Only one option is worth noting: the -p option. It does two things:
1. it will create parent directories if they did not exist previously. Without this option, mkdir would just fail, complaining that the said parent directories do not exist;
2. it will return silently if the directory you wanted to create already exists. Similarly, if you did not specify the -p option, mkdir will send back an error message, complaining that the directory already exists.
Here are some examples:
• mkdir foo: creates a directory foo in the current directory;
• mkdir -p images/misc docs: creates the misc directory in the images directory. First, it creates the latter if it does not exist (-p); it also creates a directory named docs in the current directory.
Initially, the touch command was not intended for creating files but for updating file access and modification times. However, touch will create the files listed as empty files if they do not exist. The syntax is:
touch [options] file [file...]
So running the command:
touch file1 images/file2
will create an empty file called file1 in the current directory and an empty file file2 in directory images, if the files did not previously exist.
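The two commands can be combined into a short session (directory and file names taken from the examples above):

```shell
mkdir -p images/misc        # -p creates images/ first if it is missing
mkdir -p images/misc        # -p also makes repeating the command harmless
touch file1 images/file2    # both files are created empty if absent
ls images                   # shows: file2  misc
```

Without -p, the second mkdir would fail with "File exists".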
tar and gzip
The standard compression format under UNIX® systems is the gzip format, developed by the GNU project, and it is considered one of the best general compression tools.
gzip is often associated with a utility named tar. tar is a survivor of antediluvian times, when computerists stored their data on tapes. Nowadays, floppy disks, CD-ROM and DVD have replaced tapes, but tar is still being used to create archives. All the files in a directory can be appended in a single file for instance. This file can then be easily compressed with gzip.
This is the reason why much free software is available as tar archives, compressed with gzip. So, their extensions are .tar.gz (or also .tgz to shorten).
To decompress this archive, gzip and then tar can be used. But the GNU version of tar (gtar) allows us to use gzip “on-the-fly”, and to decompress an archive file without noticing each step (and without the need for the extra disk space). The use of tar follows this format:
tar <options> <.tar.gz file> [files]

The [files] argument is not required. If it is omitted, processing applies to the whole archive; it does not need to be specified to extract the contents of a .tar.gz archive.
For instance:
$ tar xvfz guile-1.3.tar.gz
-rw-r--r-- 442/1002 10555 1998-10-20 07:31 guile-1.3/Makefile.in
-rw-rw-rw- 442/1002 6668 1998-10-20 06:59 guile-1.3/README
-rw-rw-rw- 442/1002 2283 1998-02-01 22:05 guile-1.3/AUTHORS
-rw-rw-rw- 442/1002 17989 1997-05-27 00:36 guile-1.3/COPYING
-rw-rw-rw- 442/1002 28545 1998-10-20 07:05 guile-1.3/ChangeLog
-rw-rw-rw- 442/1002 9364 1997-10-25 08:34 guile-1.3/INSTALL
-rw-rw-rw- 442/1002 1223 1998-10-20 06:34 guile-1.3/Makefile.am
-rw-rw-rw- 442/1002 98432 1998-10-20 07:30 guile-1.3/NEWS
-rw-rw-rw- 442/1002 1388 1998-10-20 06:19 guile-1.3/THANKS
-rw-rw-rw- 442/1002 1151 1998-08-16 21:45 guile-1.3/TODO
...
Among the options of tar:
• v makes tar verbose. This means it will display all the files it finds in the archive on the screen. If this option is omitted, the processing will be silent.
• f is a required option. Without it, tar tries to use a tape instead of an archive file (i.e., the /dev/rmt0 device).
• z allows you to process a "gzipped" archive (with a .gz extension). If this option is forgotten, tar will produce an error. Conversely, this option must not be used with an uncompressed archive.
tar allows you to perform several actions on an archive (extract, read, create, add...). An option defines which action is used:
• x: allows you to extract files from the archive.
• t: lists the contents of the archive.
• c: allows you to create an archive. You may use it to backup your personal files, for instance.
• r: allows you to add files at the end of the archive. It cannot be used on an already compressed archive.
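The t and r actions can be tried on a small uncompressed archive (file names invented for the demonstration; as noted above, r would fail on a compressed .tar.gz):

```shell
echo one > a.txt
tar cf demo.tar a.txt       # c: create an uncompressed archive
echo two > b.txt
tar rf demo.tar b.txt       # r: append a file at the end of the archive
tar tf demo.tar             # t: list the contents: a.txt, then b.txt
```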
Emacs
Emacs is probably the most powerful text editor in existence. It can do absolutely everything and is infinitely extensible through its built-in lisp-based programming language. With Emacs, you can move around the web, read your mail, take part in Usenet newsgroups, make coffee, and so on. This is not to say that you will learn how to do all of that in this chapter, but you will get a good start with opening Emacs, editing one or more files, saving them and quitting Emacs.
If, after reading this, you wish to learn more about Emacs, you can have a look at this Tutorial Introduction to GNU Emacs
Invoking Emacs is done as follows:
emacs [file] [file...]
Emacs will open every file entered as an argument into a separate buffer, with a maximum of two buffers visible at a time. If you start Emacs without specifying any files on the command line, you will be placed into a buffer called *scratch*. If you are in X, menus will be available, but in this chapter we will concentrate on working strictly with the keyboard.
Getting Started
It’s now time to get some hands-on experience. For our example, let us start by opening two files, file1 and file2. If these files do not exist, they will be created as soon as you write something in them:
$ emacs file1 file2
After typing that command, Emacs opens with a buffer for each file. A third buffer is also present at the bottom of the screen (where you see (New file)): that is the mini-buffer. You cannot access this buffer directly; you must be invited into it by Emacs during interactive entries. To change the current buffer, type Ctrl+x o. You type text just as in a "normal" editor, deleting characters with the DEL or Backspace key.
To move around, you can use the arrow keys, or the following key combinations: Ctrl+a to go to the beginning of the line, Ctrl+e to go to the end of the line, Alt+< to go to the beginning of the buffer, and Alt+> to go to the end of the buffer. There are many other combinations, including ones for each of the arrow keys. Once you are ready to save your changes to disk, type Ctrl+x Ctrl+s; if you want to save the contents of the buffer to another file, type Ctrl+x Ctrl+w. Emacs will ask you for the name of the file that the contents of the buffer should be written to. You can use completion to do this.
Copy, Cut, Paste, Search
First off, you will need to select the text you want to copy. In this example we want to copy the entire sentence. The first step is to place a mark at one end of the area: with the cursor at the end of the sentence, type Ctrl+Space (Control + space bar). Emacs will display the message Mark set in the mini-buffer. Next, move to the beginning of the line with Ctrl+a. The area selected for copying or cutting is everything between the mark and the cursor's current position, so in this case it is the entire line of text. There are two command sequences available: Alt+w (to copy) or Ctrl+w (to cut). If you copy, Emacs will briefly return to the mark position so that you can view the selected area.
cd (Change Directory)
The cd command is just like the DOS one, with extras. It does just what its acronym states: it changes the working directory. You can use . and .., which respectively stand for the current and parent directories. Typing cd alone will take you back to your home directory. Typing cd - will take you back to the last directory you visited. And lastly, you can specify peter's home directory by typing cd ~peter (~ on its own means your own home directory). Note that as a normal user, you usually cannot get into another user's home directory (unless they explicitly authorized it, or this is the default configuration on the system) unless you are root. So let's become root and practice:
# cd /usr/share/doc/HOWTO
# pwd
/usr/share/doc/HOWTO
# cd ../FAQ-Linux
# pwd
/usr/share/doc/FAQ-Linux
# cd ../../../lib
# pwd
/usr/lib
# cd ~peter
# pwd
/home/peter
# cd
# pwd
/root
System shutdown
The shutdown command, which brings the system down in a secure way, should always be used for halting or rebooting the system. All logged-in users are notified that the system is going down, and login is blocked.
It is possible to shut the system down immediately, or after a specified delay. All processes are first notified that the system is going down by the signal SIGTERM. This gives programs like vi time to save the file being edited, and gives mail and news processing programs a chance to exit cleanly.
The shutdown command does its job by signalling the init process, asking it to change the runlevel. The syntax for shutdown is shown below:
shutdown option(s) time warning_message
Common shutdown options are:
-c      Cancel an already running shutdown. With this option
        it is of course not possible to give the time argument, but
        you can enter an explanatory message on the command line
        that will be sent to all users.
-F      Force fsck on reboot.
-f      Skip fsck on reboot.
-h      Halt after shutdown.
-k      Don't really shut down; only send the warning messages to
        everybody.
-r      Reboot after shutdown.
-t SEC  Tell init to wait SEC seconds between sending processes the
        warning and the kill signal, and changing to another runlevel.
The shutdown command also requires a time argument. Common time
arguments are:
hh:mm   Shut down at the specified time
+M      Shut down after M minutes have elapsed
now     Shut down immediately (an alias for +0)
The warning message is sent to all logged on users periodically until shutdown takes place. If no message is specified a default message is sent.
Only root can use the shutdown command unless the file /etc/shutdown.allow exists, in which case users listed in this file can also shut down the system. Assuming the key combination CTRL-ALT-DEL is trapped by an appropriate entry in /etc/inittab, the shutdown command will be called when these keys are pressed. This means that anyone who has physical access to the console keyboard could shut the system down. To prevent this, if shutdown is called from init, it checks to see if the file /etc/shutdown.allow is present. It then compares the login names in that file with the list of people that are logged in (from /var/run/utmp). Only if one of those authorised users or root is logged in will it proceed. Otherwise it will write the following message to the system console:
shutdown: no authorised users logged in
An example of using the shutdown command is illustrated below:
[root@redhat /root]# shutdown -k +1
User and Disk Management Commands
Adding Users Manually
● The steps useradd performs, which can also be carried out manually, are:
– Add the new user's details to /etc/passwd and /etc/shadow
– Create a new group, and add the name to existing groups in /etc/group
– Create the home directory
– Copy files from /etc/skel to the user's home directory
– Set the user's password
● Removing a user:
– Reverse the steps of the add-user process
Adding Users with useradd
● Setting the password of a newly created user:
– Use the -p option followed by an encrypted password, which is inconvenient
– Run the passwd command after the user is created, which is also inconvenient when many users must be set up
– Redirect the password from a text file into the passwd command, combined with a shell script
● Removing a user with the userdel command
– userdel -options
● -r – remove the home directory
● -f – force deletion even if the user is still logged in, or shares a user ID with another account
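The encrypted-password route above can be sketched with openssl (assumptions: the openssl command is available, and "demo" is a made-up account name; the useradd line is only echoed here because actually running it needs root):

```shell
# Build an MD5-crypt hash of the password; output looks like $1$salt$hash
HASH=$(openssl passwd -1 'S3cret!')

# As root you would run: useradd -p "$HASH" demo
# Here we only print the command that would be executed.
echo useradd -p "$HASH" demo
```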
Mounting the File System
● To test-mount a new file system:
– Create a mount point in the existing file system, e.g.
● mkdir /mnt/home2
– Mount it with the mount command
● mount -t ext2 /dev/sda1 /mnt/home2
– Unmount it with umount /mnt/home2
● To unmount, the file system being unmounted must be idle
● To mount permanently, add an entry in /etc/fstab
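To make the mount permanent as the last bullet suggests, /etc/fstab gets one line per file system; a hypothetical entry matching the example device and mount point might be:

```
# device     mount point   type   options    dump  fsck-order
/dev/sda1    /mnt/home2    ext2   defaults   1     2
```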
Checking System Space/Disk Usage
● The df command reports available disk space
– With no options, it shows the space on every file system as a number of blocks
– -h – show the results in human-readable units
– -i – show inode counts instead of data-block usage
● The du command reports how much disk space is in use
– du directory_name shows the size of each file under directory_name
– options
● -s – summary: show the total space used by all files combined, instead of each file
● -k – report in kilobytes
● -h – report in human-readable units
● To monitor system performance, use top or uptime
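A quick illustration of the options above (output varies from system to system, so none is shown):

```shell
df -h                    # disk space per mounted file system, human-readable units
df -i                    # inode counts instead of data blocks
du -sh "${HOME:-/tmp}"   # -s: one summary line for the whole directory tree
```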
Download, Install, and Test VNC
1. On all your Windows machines download and install VNC free edition. Download from
http://www.realvnc.com/cgi-bin/download.cgi . Installation is self-guided. This package
includes both server and viewer. Size is about 720 K. (filename is vnc-4_1_2-
x86_win32.exe)
2. On all your Ubuntu Linux machines open System > Administration > Synaptic Package
Manager. Search for VNC. Make sure that “vino,” “vnc-common,” and “xvncviewer” are
already installed. If not, install the missing ones.
3. Test VNC from Windows: Run VNC viewer from your Windows machine. You should
be able to select any Linux or Windows box by hostname and connect to it; that is, if
these boxes are running VNC server. If not, probably you have restricted remote desktop
connections. To unrestrict, right click on "My Computer" and select "Properties." Select
the remote tab. Enable remote desktop connections.
4. Test VNC from Linux: Run VNC from your Linux machine by selecting Applications >
Internet > Terminal Server Client. Under the “General” tab, type in the hostname of the
other machine you want to connect to, and select the VNC protocol. Then type in your
name (the name you sign in with on each of your machines). Press connect. You will be
asked for a password -- in a tiny box in the upper left corner of your screen. You need not
move your cursor to this password box. Just type your password, and the password box
somehow collects it. You should connect. If not, try removing password protection – at
least during troubleshooting -- by going to System > Preferences > Remote Desktop and
un-checking password protection.
5. Firewalls frequently are the cause of failed connections. If all the above methods fail, it is
a good idea to look at your firewall settings. Assuming you are using Windows built-in
firewall, go to "Control Panel" and select "Windows Firewall." Click on the "Exceptions"
tab. Click the "Add Port" button. Add "VNC-1" and use port 5900. Then, click "Add
Port" again and add "VNC-2" at port 5800.
Linux: Moving, Copying, Deleting & Viewing Files
ls -l                      List files in the current directory using long format
ls -F                      List files in the current directory and indicate the file type
ls -laC                    List all files in the current directory in long format and display in columns
rm name                    Remove a file or directory called name
rm -rf name                Kill off an entire directory and all its included files and subdirectories
cp filename /home/dirname  Copy the file called filename to the /home/dirname directory
mv filename /home/dirname  Move the file called filename to the /home/dirname directory
cat filetoview             Display the file called filetoview
man -k keyword             Display man pages containing keyword
more filetoview            Display the file called filetoview one page at a time; proceed to the next page using the spacebar
head filetoview            Display the first 10 lines of the file called filetoview
head -20 filetoview        Display the first 20 lines of the file called filetoview
tail filetoview            Display the last 10 lines of the file called filetoview
tail -20 filetoview        Display the last 20 lines of the file called filetoview
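The head and tail entries above can be tried on a scratch file (/tmp/filetoview is just a throwaway name):

```shell
seq 1 100 > /tmp/filetoview   # make a 100-line sample file
head -3 /tmp/filetoview       # prints lines 1, 2, 3
tail -3 /tmp/filetoview       # prints lines 98, 99, 100
rm /tmp/filetoview            # clean up
```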
Simple Mail Transfer Protocol (SMTP)
Mail delivery from a client application to the server, and from an originating server to the destination server is handled by the Simple Mail Transfer Protocol (SMTP) .
The primary purpose of SMTP is to transfer email between mail servers. However, it is critical for email clients as well. In order to send email, the client sends the message to an outgoing mail server, which in turn contacts the destination mail server for delivery. For this reason, it is necessary to specify an SMTP server when configuring an email client. Under Red Hat Linux, a user can configure an SMTP server on the local machine to handle mail delivery. However, it is also possible to configure remote SMTP servers for outgoing mail. One important point to make about the SMTP protocol is that it does not require authentication. This allows anyone on the Internet to send email to anyone else or even to large groups of people. It is this characteristic of SMTP that makes junk email or spam possible. Modern SMTP servers attempt to minimize this behavior by allowing only known hosts access to the SMTP server. Those servers that do not impose such restrictions are called open relay servers.
Red Hat Linux uses Sendmail (/usr/sbin/sendmail) as its default SMTP program. However, a
simpler mail server application called Postfix (/usr/sbin/postfix) is also available.
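To make the server-to-server exchange concrete, here is a hypothetical manual SMTP session (the host name and addresses are placeholders, and the exact response strings vary by server; the numeric codes are the ones defined by the protocol):

```
$ telnet localhost 25
220 mail.example.com ESMTP
HELO client.example.com
250 mail.example.com
MAIL FROM:<alice@example.com>
250 2.1.0 Ok
RCPT TO:<bob@example.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: test

Hello from SMTP.
.
250 2.0.0 Ok: queued
QUIT
221 2.0.0 Bye
```

Note that nothing in this dialogue authenticates the sender, which is exactly the property that makes open relays attractive to spammers.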
BIND DNS
Introduction to DNS
When hosts on a network connect to one another via a hostname, also called a fully qualified domain name (FQDN), DNS is used to associate the names of machines to the IP address for the host. Use of DNS and FQDNs also has advantages for system administrators, allowing the flexibility to change the IP address for a host without affecting name-based queries to the machine. Conversely, administrators can shuffle which machines handle a name-based query.
DNS is normally implemented using centralized servers that are authoritative for some domains and refer to other DNS servers for other domains. When a client host requests information from a nameserver, it usually connects to port 53. The nameserver then attempts to resolve the FQDN based on its resolver library, which may contain authoritative information about the host requested or cached data from an earlier query. If the nameserver does not already have the answer in its resolver library, it queries other nameservers, called root nameservers, to determine which nameservers are authoritative for the FQDN in question. Then, with that information, it queries the authoritative nameservers to determine the IP address of the requested host. If performing a reverse lookup, the same procedure is used, except the query is made with an unknown IP address rather than a name.
Nameserver Zones
On the Internet, the FQDN of a host can be broken down into different sections. These sections are organized into a hierarchy much like a tree, with a main trunk, primary branches, secondary branches, and so forth. Consider the following FQDN:
bob.sales.example.com
When looking at how a FQDN is resolved to find the IP address that relates to a particular system, read the name from right to left, with each level of the hierarchy divided by periods (.). In this example, com defines the top level domain for this FQDN. The name example is a sub-domain under com, while sales is a sub-domain under example. The name furthest to the left, bob, identifies a specific machine.
Except for the hostname, each section is called a zone, which defines a specific namespace. A namespace controls the naming of the sub-domains to its left. While this example only contains two sub-domains, a FQDN must contain at least one sub-domain but may include many more, depending upon how the namespace is organized.
Zones are defined on authoritative nameservers through the use of zone files, which describe the namespace of that zone, the mail servers to be used for a particular domain or sub-domain, and more. Zone files are stored on primary nameservers (also called master nameservers), which are truly authoritative and where changes are made to the files, and secondary nameservers (also called slave nameservers), which receive their zone files from the primary nameservers. Any nameserver can be a primary and secondary nameserver for different zones at the same time, and they may also be considered authoritative for multiple zones. It all depends on how the nameserver is configured.
Nameserver Types
There are four primary nameserver configuration types:
master — Stores original and authoritative zone records for a certain namespace, answering questions from other nameservers searching for answers concerning that namespace.
slave — Answers queries from other nameservers concerning namespaces for which it is considered an authority. However, slave nameservers get their namespace information from master nameservers.
caching-only — Offers name to IP resolution services but is not authoritative for any zones. Answers for all resolutions are cached in memory for a fixed period of time, which is specified by the retrieved zone record.
forwarding — Forwards requests to a specific list of nameservers for name resolution. If none of the specified nameservers can perform the resolution, the resolution fails.
A nameserver may be one or more of these types. For example, a nameserver can be a master for some zones, a slave for others, and only offer forwarding resolutions for others.
BIND as a Nameserver
BIND performs name resolution services through the /usr/sbin/named daemon. BIND also includes an administration utility called /usr/sbin/rndc.
BIND stores its configuration files in the following two places:
/etc/named.conf — The configuration file for the named daemon.
/var/named/ directory — The named working directory which stores zone, statistic, and cache files.
/etc/named.conf
The named.conf file is a collection of statements using nested options surrounded by opening and closing curly braces, { }. Administrators must be careful when editing named.conf to avoid syntax errors, as many seemingly minor errors will prevent the named service from starting.
Example Zone File
Seen individually, directives and resource records can be difficult to grasp. However, when placed together in a single file, they become easier to understand.
The following example shows a very basic zone file.
$ORIGIN example.com.
$TTL 86400
@ IN SOA dns1.example.com. hostmaster.example.com. (
2001062501 ; serial
21600 ; refresh after 6 hours
3600 ; retry after 1 hour
604800 ; expire after 1 week
86400 ) ; minimum TTL of 1 day
IN NS dns1.example.com.
IN NS dns2.example.com.
IN MX 10 mail.example.com.
IN MX 20 mail2.example.com.
IN A 10.0.1.5
server1 IN A 10.0.1.5
server2 IN A 10.0.1.7
dns1 IN A 10.0.1.2
dns2 IN A 10.0.1.3
ftp IN CNAME server1
mail IN CNAME server1
mail2 IN CNAME server2
www IN CNAME server2
In this example, standard directives and SOA values are used. The authoritative nameservers are set as dns1.example.com and dns2.example.com, which have A records that tie them to 10.0.1.2 and 10.0.1.3, respectively.
The email servers configured with the MX records point to server1 and server2 via CNAME records. Since the server1 and server2 names do not end in a trailing period (.), the $ORIGIN domain is placed after them, expanding them to server1.example.com and server2.example.com. Through the related A resource records, their IP addresses can be determined.
FTP and Web services, available at the standard ftp.example.com and www.example.com names, are pointed at the appropriate servers using CNAME records.
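For named to serve the zone file above, /etc/named.conf needs a matching zone statement; a minimal sketch (assuming the zone file is saved under /var/named/ as example.com.zone) might be:

```
zone "example.com" IN {
    type master;
    file "example.com.zone";
    allow-update { none; };
};
```

A slave nameserver for the same zone would instead use `type slave;` plus a `masters { ... };` list naming the primary's IP address.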
Linux Ethernet Interfaces
One of the most common interface configuration files is ifcfg-eth0, which controls the first Ethernet network interface card, or NIC, in the system. In a system with multiple NICs, there are multiple ifcfg-eth files, each numbered to match its device. Because each device has its own configuration file, an administrator can control how each interface functions individually. Below is a sample ifcfg-eth0 file for a system using a fixed IP address:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.0.1.0
NETMASK=255.255.255.0
IPADDR=10.0.1.27
USERCTL=no
The values required in an interface configuration file can change based on other values. For example, the ifcfg-eth0 file for an interface using DHCP looks quite a bit different, because IP information is provided by the DHCP server:
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
The Network Administration Tool (redhat-config-network) is an easy way to make changes to
the various network interface configuration files (see the chapter titled Network Configuration in the
Red Hat Linux Customization Guide for detailed instructions on using this tool).
However, it is also possible to edit the configuration files for a given network interface by hand.
Below is a listing of some of the configurable parameters in an Ethernet interface configuration file:
. BOOTPROTO=protocol, where protocol is one of the following:
. none — No boot-time protocol should be used.
. bootp — The BOOTP protocol should be used.
. dhcp — The DHCP protocol should be used.
. DEVICE=name, where name is the name of the physical device (except for dynamically-allocated PPP devices, where it is the logical name).
. DNS{1,2}=address, where address is a name server address to be placed in
/etc/resolv.conf if the PEERDNS directive is set to yes.
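Putting the DNS and PEERDNS directives together with the earlier static example, a hypothetical ifcfg-eth0 that also populates /etc/resolv.conf might read (addresses are made up):

```
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.1.27
NETMASK=255.255.255.0
PEERDNS=yes
DNS1=10.0.1.2
DNS2=10.0.1.3
```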
Linux Red Hat Network Configuration Files
The primary network configuration files are as follows:
. /etc/hosts . The main purpose of this file is to resolve hostnames that cannot be resolved any
other way. It can also be used to resolve hostnames on small networks with no DNS server. Regardless of the type of network the computer is on, this file should contain a line specifying the IP address of the loopback device (127.0.0.1) as localhost.localdomain. For more information,
see the hosts man page.
. /etc/resolv.conf . This file specifies the IP addresses of DNS servers and the search domain.
Unless configured to do otherwise, the network initialization scripts populate this file. For more
information on this file, see the resolv.conf man page.
. /etc/sysconfig/network . Specifies routing and host information for all network
interfaces. For more information on this file and the directives it accepts, see Section 4.1.23
/etc/sysconfig/network.
. /etc/sysconfig/network-scripts/ifcfg
For each network interface on a Red Hat Linux system, there is a corresponding interface configuration script. Each of these files provides information specific to a particular network interface. See Section 8.2 Interface Configuration Files for more information on this type of file and the directives it accepts.
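As an illustration of the first two files, minimal hand-written versions could look like this (the station1 host name and the 10.0.1.x addresses are made up):

```
# /etc/hosts
127.0.0.1   localhost.localdomain localhost
10.0.1.27   station1.example.com  station1

# /etc/resolv.conf
search example.com
nameserver 10.0.1.2
nameserver 10.0.1.3
```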
Linux Runlevel 3
When in runlevel 3, the best way to start an X session is to log in and type startx. The startx
command is a front-end to the xinit command, which launches the XFree86 server and connects X client applications to it. Because the user is already logged into the system at runlevel 3, startx does not launch a display manager or authenticate users. Refer to Section 7.5.2 Runlevel 5 for more information about display managers.
When the startx command is executed, it searches for a .xinitrc file in the user's home directory
to define the desktop environment and possibly other X client applications to run. If no .xinitrc file
is present, it will use the system default /etc/X11/xinit/xinitrc file instead.
The default xinitrc script then looks for user-defined files and default system files, including .Xresources,
.Xmodmap, and .Xkbmap in the user's home directory and Xresources, Xmodmap, and
Xkbmap in the /etc/X11/ directory. The Xmodmap and Xkbmap files, if they exist, are used by the xmodmap utility to configure the keyboard. The Xresources files are read to assign specific preference values to applications.
After setting these options, the xinitrc script executes all scripts located in the
/etc/X11/xinit/xinitrc.d/ directory. One important script in this directory is xinput, which
configures settings such as the default language.
Next, the xinitrc script tries to execute .Xclients in the user's home directory and turns to
/etc/X11/xinit/Xclients if it cannot be found. The purpose of the Xclients file is to start
the desktop environment or, possibly, just a basic window manager. The .Xclients script in the
user's home directory starts the user-specified desktop environment defined in the .Xclients-default file.
If .Xclients does not exist in the user's home directory, the standard /etc/X11/xinit/Xclients
script attempts to start another desktop environment, trying GNOME first and then KDE, followed by twm. The user is returned to a text-mode user session after logging out of X from runlevel 3.
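As a minimal sketch, a .xinitrc in the user's home directory that bypasses the Xclients chain entirely might contain just:

```
# ~/.xinitrc - hypothetical minimal session: merge X resources, then run twm
xrdb -merge "$HOME/.Xresources"
exec twm
```

Whatever program is exec'd last becomes the session; when it exits, the X session ends.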
linux ssh setup
1. On all your Windows machines download and install OpenSSH client. Download from
http://www.wm.edu/it/index.php?id=2928 The download site provides installation
instructions. This package is the client side only, which allows you to initiate an SSH
session from the machine you install it on. Size is about 5.7 MB. (filename is
sshsecureshellclient-3.2.9.exe)
2. On all your Windows machines download and install the COPSSH server for Windows.
To do this, open a browser and search Google for COPSSH. Look for the Sourceforge
download site, and select it. Your download will be from this URL:
http://sourceforge.net/project/showfiles.php?group_id=69227&package_id=127780 .
Note that SSH servers for Windows are mostly very expensive. To get around this
expense, someone developed Cygwin, a small Unix-like environment that embeds in Windows,
and COPSSH runs as an SSH server from within that environment on your Windows machine.
This free COPSSH server software allows your Windows PC to "serve" up its information to a
remote PC (Linux or Windows). Access to your Windows machine's drives is
through the folder called cygdrive.
3. On all your Ubuntu Linux machines open System > Administration > Synaptic Package
Manager. Search for ssh. Make sure openssh-client and openssh-server are both installed.
If they are not, install them.
4. Test SSH from Windows: From within your Windows machine's Start menu, select “SSH
Secure Shell Client.” Then select Secure File Transfer Client. Select Quick Connect, and
type in the name of the machine you want to connect to. Add your username, and click
Connect.
5. Test SSH from Linux: From within your Linux machine, select Places > Connect to
Server. Under Service Type, select SSH. Add the name of the Server you want to connect
to, and press Connect. This should put an icon on your desktop and in your Nautilus file
browser.
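Once the OpenSSH pieces are installed, a client-side ~/.ssh/config entry can shorten the connection step from the Linux side; the host alias, host name, and user below are hypothetical:

```
# ~/.ssh/config - hypothetical host alias
Host mybox
    HostName server1.example.com
    User joe
    Port 22
```

With this in place, `ssh mybox` is equivalent to `ssh -p 22 joe@server1.example.com`.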
linux vnc setup test
1. On all your Windows machines download and install VNC free edition. Download from
http://www.realvnc.com/cgi-bin/download.cgi . Installation is self guided. This package
includes both server and viewer. Size is about 720 K. (filename is vnc-4_1_2-
x86_win32.exe)
2. On all your Ubuntu Linux machines open System > Administration > Synaptic Package
Manager. Search for VNC. Make sure that “vino,” “vnc-common,” and “xvncviewer” are
already installed. If not, install the missing ones.
3. Test VNC from Windows: Run VNC viewer from your Windows machine. You should
be able to select any Linux or Windows box by hostname and connect to it; that is, if
these boxes are running a VNC server. If not, remote desktop connections are probably
restricted. To allow them, right-click on "My Computer" and select "Properties." Select
the Remote tab. Enable remote desktop connections.
4. Test VNC from Linux: Run VNC from your Linux machine by selecting Applications >
Internet > Terminal Server Client. Under the “General” tab, type in the hostname of the
other machine you want to connect to, and select the VNC protocol. Then type in your
name (the name you sign in with on each of your machines). Press connect. You will be
asked for a password -- in a tiny box in the upper left corner of your screen. You need not
move your cursor to this password box. Just type your password, and the password box
somehow collects it. You should connect. If not, try removing password protection – at
least during troubleshooting -- by going to System > Preferences > Remote Desktop and
un-checking password protection.
5. Firewalls frequently are the cause of failed connections. If all the above methods fail, it is
a good idea to look at your firewall settings. Assuming you are using Windows built-in
firewall, go to "Control Panel" and select "Windows Firewall." Click on the "Exceptions"
tab. Click the "Add Port" button. Add "VNC-1" and use port 5900. Then, click "Add
Port" again and add "VNC-2" at port 5800.
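The two port numbers in step 5 are not arbitrary: VNC listens on 5900 plus the display number for the RFB protocol, and on 5800 plus the display number for its built-in HTTP/Java viewer, so display 0 needs 5900 and 5800. A quick sketch of the mapping:

```shell
#!/bin/sh
# Map a VNC display number to its RFB and HTTP viewer ports.
display=0
rfb_port=$((5900 + display))    # main VNC (RFB) port
http_port=$((5800 + display))   # built-in HTTP/Java viewer port
echo "display :$display -> RFB $rfb_port, HTTP $http_port"
```

Display :1 would therefore need 5901 and 5801 opened instead.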
Linux Creating an Installation Diskette
The first step in getting Red Hat's distribution of Linux onto a system is to find a way of starting the installation program. The usual method is to create an installation diskette, although if you are installing from CD−ROM, and your system's BIOS supports it, you should be able to boot directly into the installation program from the CD.
Otherwise, to create an installation diskette, you'll need to copy the ``boot.img'' (which is simply an image of an ext2−formatted Linux boot diskette with an additional installation program) onto a floppy diskette. The ``boot.img'' file can be obtained from the /images directory of the Red Hat CD−ROM disk, or downloaded via FTP from ftp://ftp.redhat.com in the /pub/redhat/redhat−6.1/i386/images directory (assuming
you are installing Linux on an Intel box).
You can create the boot diskette either from a DOS or Windows system, or from an existing Linux or Unix system. For your destination diskette, you can use either an unformatted or a pre−formatted (for DOS)
diskette −− it makes no difference.
Under DOS: Assuming your CD−ROM is accessible as drive D:, you can type:
d:
cd \images
..\dosutils\rawrite
For the source file, enter ``boot.img''. For the destination file, enter ``a:'' (assuming the
diskette you created is inserted into the A: drive). The ``rawrite'' program will then
copy the ``boot.img'' file onto the diskette.
Under Linux/Unix: Assuming the ``boot.img'' file is located in the current directory (you may need to
mount the CD−ROM under /mnt/cdrom and find the file in /mnt/cdrom/images), you can type:
dd if=boot.img of=/dev/fd0
The ``dd'' utility will copy, as its input file ("if"), the ``boot.img'' file, onto the output file
("of") /dev/fd0 (assuming your floppy drive is accessible from /dev/fd0).
Unless your Linux or Unix system allows write permissions to the floppy device, you may
need to do this command as the superuser. (If you know the root password, type ``su'' to
become the superuser, execute the ``dd'' command, and then type ``exit'' to return to
normal user status).
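The dd step can be rehearsed safely by writing to an ordinary file instead of /dev/fd0 and comparing the result; the filenames below are hypothetical stand-ins:

```shell
#!/bin/sh
# Rehearse the image copy with plain files, then verify byte-for-byte.
dd if=/dev/zero of=boot-test.img bs=1024 count=16 2>/dev/null   # stand-in for boot.img
dd if=boot-test.img of=floppy-test.img 2>/dev/null              # stand-in for of=/dev/fd0
cmp boot-test.img floppy-test.img && echo "copy verified"
rm -f boot-test.img floppy-test.img
```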
Linux DNS and Nslookup
The Domain Name Service, also known as DNS, allows you, the user, to translate names
like www.yahoo.com into a number like 216.32.74.52 which is needed for your computer
to communicate over the network. Networks are controlled by gremlins that only use
numbers not names. Thus DNS is very important if you are going to use the network at
all. Most of the time your ISP provides this service, especially if
you are using a modem, cable modem, DSL, etc. If you are in a large corporate setting,
your own company network will provide DNS.
The main tool you need to check that your DNS is working is nslookup which can translate
names to numbers or vice-versa:
[LocalHost]/home/joe:nslookup www.io.com
Server: flure.pair.com
Address: 209.68.1.159
Name: www.io.com
Addresses: 199.170.88.21, 199.170.88.41, 199.170.88.39
Nslookup tells us there are three addresses that go with www.io.com. Large sites like www.io.com
and www.yahoo.com often have many addresses because they maintain several servers to
handle all the requests to their very busy sites. The server flure.pair.com is our DNS
server as seen by our local DNS setup.
DNS can also translate numbers into names:
[LocalHost]/home/joe:nslookup 199.170.88.21
Server: flure.pair.com
Address: 209.68.1.159
Name: www-02.io.com
Address: 199.170.88.21
This was one of the www.io.com sites listed in the first example. If DNS can't find a
name, there is little chance you can connect to it through the Internet.
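When a site returns several addresses, the list can be pulled out of nslookup-style output with a little awk; the input below is the captured sample from above:

```shell
#!/bin/sh
# Extract the address list from captured nslookup output.
awk -F': ' '/^Addresses:/ { print $2 }' <<'EOF'
Server: flure.pair.com
Address: 209.68.1.159
Name: www.io.com
Addresses: 199.170.88.21, 199.170.88.41, 199.170.88.39
EOF
```

This prints the three addresses exactly as nslookup listed them.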
Installing MySQL on MS Windows
Download MySQL (Windows binary) from http://www.mysql.com/downloads/index.html
1. After downloading, extract the archive and run the file named Setup.exe. A window will appear as shown; click the Next button.
2. Choose the folder where MySQL will be installed, then click Next.
3. Choose the installation type. For a typical installation, select Typical and click Next.
4. When the installation is finished, a message will inform you as shown; click Finish.
5. Next, run winmysqladmin.exe, which is located in a path such as c:\mysql\bin\.
6. The first time you run winmysqladmin.exe, enter the username and password that will be used to access MySQL, then click OK.
7. A small traffic-light icon will appear in the Taskbar; this is the winmysqladmin icon. A green light means MySQL is running; a red light means MySQL has stopped. You can use winmysqladmin to start and stop MySQL.
8. To stop MySQL, right-click the traffic-light icon and choose Win NT -> Stop the Service.
9. To start MySQL, right-click the traffic-light icon and choose Win NT -> Start the Service.
10. To open the winmysqladmin program window, right-click the traffic-light icon and choose Show me.
Linux Settings DNS
You need to create a zone in /etc/named.conf. The zone name must match the domain name.
Additional sub-domains can go in the same zone, but other domains cannot. For example, it is not possible to create a zone named "example.com" and then put an "A record" for "somesite.com" in it. Instead, other domains must go in their own zones.
Here is an example zone.
zone "example.com" {
type master;
file "/var/named/joel/example.com.hosts";
};
Once you've created the zone, you need to create a zone file in /var/named. Of course, the filename must match the name you specified above. I group my zone files by username. In this case, I have a user named “joel”, so I create a directory called /var/named/joel and I create a zone file called example.com.hosts in that directory.
Usually, there will be other zone files you can copy from. Below is an example of what the
example.com.hosts file might look like.
$ttl 1800
example.com. IN SOA ns1.example.com. admin.example.com. (
        1089054655
        10800
        3600
        604800
        1800 )
example.com. IN NS ns1.example.com.
example.com. IN NS ns2.example.com.
example.com. IN A 127.161.144.16
ns1.example.com. IN A 127.161.144.16
ns2.example.com. IN A 127.161.144.17
www.example.com. IN CNAME example.com.
mail.example.com. IN CNAME example.com.
example.com. IN MX 1 mail.example.com.
The lines that have “NS” in them show the name servers. In this case, there are two name servers doing DNS for example.com. Those are ns1.example.com and ns2.example.com.
The lines that have “A” in them are “A records”. These specify IP addresses for those domain names. So, example.com points to 127.161.144.16 (this is a fake example).
The lines that have “CNAME” in them are like shortcuts or links to A records. For example,
www.example.com is a CNAME of example.com. The A record for example.com points to
127.161.144.16, so www.example.com also points to 127.161.144.16.
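Following the same pattern, a new sub-domain would get one more line in example.com.hosts, either its own A record or a CNAME; the name and address below are as fake as the rest of the example:

```
ftp.example.com.   IN A      127.161.144.18
smtp.example.com.  IN CNAME  mail.example.com.
```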
Linux Network commands
RESTART
# /etc/init.d/networking restart
SHOW ROUTING TABLE
# netstat -rn
# route -n
Add route
# route add -net 10.4.1.0/24 gw 192.168.200.247
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.200.240 0.0.0.0 255.255.255.240 U 0 0 0 eth0
10.4.1.0 192.168.200.247 255.255.255.0 UG 0 0 0 eth0
0.0.0.0 192.168.200.254 0.0.0.0 UG 0 0 0 eth0
SHOW SERVICE PORT
# netstat -tanp
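The Genmask column in the routing table is just the prefix length written in dotted form; the /24 in the route add command above corresponds to 255.255.255.0. A sketch of the conversion using only shell arithmetic:

```shell
#!/bin/sh
# Convert a CIDR prefix length (1-32) to a dotted-decimal netmask.
prefix=24
mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF ))
echo "$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
```

For prefix=24 this prints 255.255.255.0, matching the Genmask shown for the added route.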
Linux commands for installing software
rpm -ihv name.rpm Install the rpm package called name
rpm -Uhv name.rpm Upgrade the rpm package called name
rpm -e package Delete the rpm package called package
rpm -ql package List the files in the installed package called package
rpm -q package State the installed version of the package called package
rpm -i --force name.rpm Reinstall the rpm package called name after having deleted parts of it (without
deleting it using rpm -e)
tar -zxvf archive.tar.gz or
tar -zxvf archive.tgz
Decompress the files contained in the zipped and tarred archive called archive
./configure Execute the script preparing the installed files for compiling
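The tar and gzip steps above can be exercised end-to-end in a scratch directory; the filenames are hypothetical:

```shell
#!/bin/sh
# Create, list, and extract a small gzipped tar archive.
mkdir -p demo/src
echo "hello" > demo/src/a.txt
tar -zcf archive.tar.gz -C demo src       # create the compressed archive
tar -ztvf archive.tar.gz                  # list contents without extracting
mkdir -p out && tar -zxf archive.tar.gz -C out
cat out/src/a.txt                         # -> hello
rm -rf demo out archive.tar.gz
```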
Linux tips mounting file systems
mount -t iso9660 /dev/cdrom /mnt/cdrom
Mount the device cdrom and call it cdrom under the /mnt directory
mount -t msdos /dev/hdd /mnt/ddrive
Mount hard disk “d” as a msdos file system and call it ddrive under the /mnt directory
mount -t vfat /dev/hda1 /mnt/cdrive
Mount the first partition of hard disk “a” as a VFAT file system and call it cdrive under the /mnt directory
umount /mnt/cdrom
Unmount the cdrom
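The same mounts can be made permanent in /etc/fstab; the entries below are a sketch using the device names and mount points from the examples above:

```
# /etc/fstab - hypothetical entries matching the mounts above
/dev/cdrom   /mnt/cdrom   iso9660  ro,noauto  0 0
/dev/hda1    /mnt/cdrive  vfat     defaults   0 0
```

With these in place, `mount /mnt/cdrive` works without repeating the -t and device arguments.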
Linux tips Starting & Stopping
shutdown -h now Shutdown the system now and do not reboot
halt Stop all processes - same as above
shutdown -r 5 Shutdown the system in 5 minutes and reboot
shutdown -r now Shutdown the system now and reboot
reboot Stop all processes and then reboot - same as above
startx Start the X system
Comparing Linux and Windows commands (Files Translating Commands)
On the left, the DOS commands; on the right, their Linux counterpart.
ATTRIB: chmod
COPY: cp
DEL: rm
MOVE: mv
REN: mv
TYPE: more, less, cat
DOS Linux
============================================================
C:\GUIDO>ATTRIB +R FILE.TXT $ chmod 400 file.txt
C:\GUIDO>COPY JOE.TXT JOE.DOC $ cp joe.txt joe.doc
C:\GUIDO>COPY *.* TOTAL $ cat * > total
C:\GUIDO>COPY FRACTALS.DOC PRN $ lpr fractals.doc
C:\GUIDO>DEL TEMP $ rm temp
C:\GUIDO>DEL *.BAK $ rm *~
C:\GUIDO>MOVE PAPER.TXT TMP\ $ mv paper.txt tmp/
C:\GUIDO>REN PAPER.TXT PAPER.ASC $ mv paper.txt paper.asc
C:\GUIDO>PRINT LETTER.TXT $ lpr letter.txt
C:\GUIDO>TYPE LETTER.TXT $ more letter.txt
C:\GUIDO>TYPE LETTER.TXT $ less letter.txt
C:\GUIDO>TYPE LETTER.TXT > NUL $ cat letter.txt > /dev/null
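A few of the translations above, run in a scratch directory with hypothetical filenames:

```shell
#!/bin/sh
# DOS: COPY *.* TOTAL / TYPE FILE / TYPE FILE > NUL, in Linux form.
mkdir -p demo && cd demo
printf 'one\n' > a.txt
printf 'two\n' > b.txt
cat a.txt b.txt > total       # like COPY *.* TOTAL
cat total                     # like TYPE TOTAL: prints "one" then "two"
cat total > /dev/null         # like TYPE TOTAL > NUL: output discarded
cd .. && rm -rf demo
```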
Linux Firewalls
A firewall is a structure intended to keep a fire from spreading. Buildings have firewalls made of brick walls completely dividing sections of the building. In a car, a firewall is the metal wall separating the engine and passenger compartments. Internet firewalls are intended to keep the flames of Internet hell out of your private LAN, or to keep the members of your LAN pure and chaste by denying them access to all the evil Internet temptations.
The first computer firewall was a non−routing Unix host with connections to two different networks. One network card connected to the Internet and the other to the private LAN. To reach the Internet from the private network, you had to log on to the firewall (Unix) server. You then used the resources of the system to access the Internet. For example, you could use X−Windows to run Netscape's browser on the firewall system and have the display on your workstation. With the browser running on the firewall, it has access to both networks. This sort of dual-homed system (a system with two network connections) is great if you can TRUST ALL of
your users. You can simply set up a Linux system and give accounts on it to everyone needing Internet access. With this setup, the only computer on your private network that knows anything about the outside world is the firewall. No one can download to their personal workstations. They must first download a file to the firewall and then download the file from the firewall to their workstation.
Installing the Domain Name Service (DNS) on Ubuntu
Domain Name Service (DNS) is an Internet service that maps IP addresses and fully qualified domain names (FQDN) to one another. In this way, DNS alleviates the need to remember IP addresses. Computers that run DNS are called name servers. Ubuntu ships with BIND (Berkeley Internet Name Domain), the most common program used for maintaining a name server on Linux.
Installation
At a terminal prompt, enter the following command to install DNS:
sudo apt-get install bind9
Configuration
The DNS configuration files are stored in the /etc/bind directory. The primary configuration file is
/etc/bind/named.conf. The content of the default configuration file is shown below:
// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind/README.Debian for information on the
// structure of BIND configuration files in Debian for BIND versions 8.2.1
// and later, *BEFORE* you customize this configuration file.
//
include "/etc/bind/named.conf.options";
// reduce log verbosity on issues outside our control
logging {
category lame-servers { null; };
category cname { null; };
};
// prime the server with knowledge of the root servers
zone "." {
type hint;
file "/etc/bind/db.root";
};
// be authoritative for the localhost forward and reverse zones, and for
// broadcast zones as per RFC 1912
zone "localhost" {
type master;
file "/etc/bind/db.local";
};
zone "127.in-addr.arpa" {
type master;
file "/etc/bind/db.127";
};
zone "0.in-addr.arpa" {
type master;
file "/etc/bind/db.0";
};
zone "255.in-addr.arpa" {
type master;
file "/etc/bind/db.255";
};
// add local zone definitions here
include "/etc/bind/named.conf.local";
The include line specifies the filename that contains the DNS options. The directory line in the
options file tells DNS where to look for files; all files BIND uses are relative to this directory.
The file /etc/bind/db.root describes the root name servers of the world. These servers change over time, so the /etc/bind/db.root file must be updated from time to time.
A zone section defines a master server, whose data is stored in the file named by the file directive. Every zone file contains three resource records (RRs): an SOA RR, an NS RR and a PTR RR. SOA is short for Start of Authority; the "@" is a special notation meaning the origin. NS is the Name Server RR, and PTR is the Domain Name Pointer. To start the DNS server, run the following command from a terminal prompt:
sudo /etc/init.d/bind9 start
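The zone files referenced above share a common format. As a sketch (the domain example.com and all addresses below are invented for illustration), a master zone file declared in /etc/bind/named.conf.local might look like this:

```
; Hypothetical zone file, e.g. /etc/bind/db.example.com
$TTL    86400
@       IN      SOA     ns1.example.com. admin.example.com. (
                        1       ; serial
                        604800  ; refresh
                        86400   ; retry
                        2419200 ; expire
                        86400 ) ; negative cache TTL
; the "@" above means the zone origin (example.com)
@       IN      NS      ns1.example.com.
ns1     IN      A       192.168.1.10
www     IN      A       192.168.1.20
```

Increment the serial number every time the file is edited, or secondary servers will not pick up the change.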
Setup Linux ipchains
# Firewall configuration
# Manual customization of this file is not recommended.
# Note: ifup-post will punch the current nameservers through the
# firewall; such entries will *not* be listed here.
:input ACCEPT
:forward ACCEPT
:output ACCEPT
-A input -s 0/0 -d 0/0 -i lo -j ACCEPT
-A forward -s 192.168.1.0/24 -d 0/0 -j MASQ
Editing the Linux password file
/etc/passwd
The file has one line per username, and is divided into seven colon-delimited fields:
1. Username.
2. Password, in an encrypted form.
3. Numeric user id.
4. Numeric group id.
5. Full name or other description of the account (the GECOS field).
6. The user's home directory.
7. The user's login shell (program to run at login).
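The seven fields can be split on ':' directly in the shell. A small sketch (the account "alice" below is a made-up example, not a real entry):

```shell
# Hypothetical /etc/passwd line for illustration
line='alice:x:1001:1001:Alice Example:/home/alice:/bin/bash'

# Split the line into the seven colon-delimited fields
IFS=: read -r user passwd uid gid gecos home shell <<EOF
$line
EOF

echo "user=$user uid=$uid home=$home shell=$shell"
```

On modern systems the password field holds just "x", with the encrypted password kept in /etc/shadow.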
Linux runlevel management
exit Terminates the shell.
halt Stop the system.
init Process control initialization.
initscript Script that executes inittab commands.
logout Log the user off the system.
poweroff Brings the system down.
reboot Reboot the system.
runlevel List the current and previous runlevel.
setsid Run a program in a new session.
shutdown If your system has many users, use the command "shutdown -h +time message", where time is the time in minutes until the system is halted, and message is a short explanation of why the system is shutting down.
# shutdown -h +10 'We will install a new disk. System should be back on-line in three hours.'
telinit By requesting run level 1 a system can be taken to single user mode.
Linux command: MC(midnight commander)
Description
The Midnight Commander is a directory browser/file manager for Unix-like operating systems.
The screen of the Midnight Commander is divided into four parts. Almost all of the screen space is taken up by two directory panels. By default, the second bottommost line of the screen is the shell command line, and the bottom line shows the function key labels. The topmost line is the menu bar line. The menu bar line may not be visible, but appears if you click the topmost line with the mouse or press the F9 key.
The Midnight Commander provides a view of two directories at the same time. One of the panels is the current panel (a selection bar is in the current panel). Almost all operations take place on the current panel. Some file operations like Rename and Copy by default use the directory of the unselected panel as a destination (don't worry, they always ask you for confirmation first). For more information, see the sections on the directory panels, the left and right menus, and the file menu.
You can execute system commands from the Midnight Commander by simply typing them. Everything you type will appear on the shell command line, and when you press Enter the Midnight Commander will execute the command line you typed; read the shell command line and input line keys sections to learn more about the command line.
The mount and umount commands
mount is the channel through which Linux attaches devices. Suppose you add one more hard disk to a server and the system sees the new disk as hdc (check with fdisk -l). To attach the first partition of hdc as the directory /x, first create the directory:
#mkdir /x (needed only the first time), then run
#mount /dev/hdc1 /x and you can now use /x, which lives on the second hard disk.
Commands related to mount:
#cat /etc/fstab : view the file system table, which lists what is to be mounted
#cat /etc/mtab : view details of the current mounts
#cat /proc/mounts : show what is currently mounted
#cat /proc/partitions : show the name and size of each partition
#cat /proc/filesystems : show the supported filesystem types
#/sbin/fdisk -l : show the partitions of every hard disk attached to the machine
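These /proc files are plain text, so the shell can parse them directly. A small sketch (assumes a Linux-style /proc) printing each mount point and its filesystem type:

```shell
# Fields of /proc/mounts: device, mount point, type, options, dump, pass.
# Print "mount-point -> fs-type" for the first few mounted filesystems.
while read -r dev mnt type opts dump pass; do
    printf '%s -> %s\n' "$mnt" "$type"
done < /proc/mounts | head -n 5
```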
mkdir make directories
mkdir - make directories
SYNOPSIS
mkdir [OPTION]... DIRECTORY...
DESCRIPTION
Create the DIRECTORY(ies), if they do not already exist.
Mandatory arguments to long options are mandatory for short options too.
-Z, --context=CONTEXT (SELinux) set security context to CONTEXT
-m, --mode=MODE
set permission mode (as in chmod), not rwxrwxrwx - umask
-p, --parents
no error if existing, make parent directories as needed
-v, --verbose
print a message for each created directory
--help display this help and exit
--version
output version information and exit
AUTHOR
Written by David MacKenzie.
REPORTING BUGS
Report bugs to <bug-coreutils@gnu.org>.
COPYRIGHT
Copyright © 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is
NO warranty; not even for MERCHANTABILITY or FITNESS FOR
A PARTICULAR PURPOSE.
SEE ALSO
The full documentation for mkdir is maintained as a Texinfo manual. If the info
and mkdir programs are properly installed at your site, the command info mkdir should give you access to the complete manual.
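For example, using -p to create a whole nested tree in one call (the path /tmp/demo is an arbitrary choice):

```shell
# -p creates any missing parent directories along the path
mkdir -p /tmp/demo/a/b/c

# A second run is a no-op rather than an error, also thanks to -p
mkdir -p /tmp/demo/a/b/c
```

Without -p, mkdir fails if a parent directory does not exist or if the target already exists.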
Basic Linux commands
The essential basic Linux (Unix) commands are:
· cat - display the contents of a file
· chmod - set the permissions of a file or directory
· cd - change directory
· pwd - show the current directory
· cp - copy files
· echo - print text
· ls - list files and directories
· more - page output one screen at a time
· mkdir - create a directory
· mv - move or rename a file or directory
· rm - delete files
· rmdir - delete a directory
· telnet - remote login
· ftp - transfer files
· vi - full-screen editor
· pico - simple Unix editor
· df - show remaining disk space
· du - show disk usage in kilobytes
· mount - attach a file system
· umount - detach a file system
· ifconfig - network interface configuration tool
· netstat - show information about network interfaces
· rpm - package management command for Red Hat (Red Hat's Package Manager)
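A short self-contained session exercising several of the commands above (all paths and file names are arbitrary examples):

```shell
mkdir -p /tmp/basics && cd /tmp/basics   # mkdir + cd
echo 'hello' > greet.txt                 # echo, redirected into a file
cat greet.txt                            # show the file contents
cp greet.txt copy.txt                    # copy the file
mv copy.txt renamed.txt                  # rename the copy
ls                                       # list: greet.txt renamed.txt
pwd                                      # show the current directory
rm renamed.txt                           # delete the renamed copy
```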
Changing the login from graphical to text mode
Edit the file /etc/inittab, for example: # pico /etc/inittab
# inittab This file describes how the INIT process should set up
# the system in a certain run-level.
#
# Modified for RHS Linux by Marc Ewing and Donnie Barnes
#
# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)
id:3:initdefault:
In the last line, change the original 5 (graphical login) to 3 (text-mode login).
Installing a web application stack with Apache, PHP, MySQL and phpMyAdmin
Download the following files, which together make up the web application development environment:
1 apache_2.2.9-win32-x86-openssl-0.9.8h-r2.msi
2 mysql-shareware-3.22.34-win.zip
3 php-5.2.6-Win32.zip
4 phpMyAdmin 2.11.8.1
Apache
Download from http://www.apache.org/
Steps to install Apache as a web server:
1. Double-click the apache folder.
2. Install Apache by running apache_2.2.9-win32-x86-openssl-0.9.8h-r2.msi.
3. Select "I accept the terms in the license agreement", then click Next.
4. Set the Domain name and Server name to localhost, enter an e-mail address, select the option "Run as service for All Users", then click Next.
5. Click Next through the remaining steps, then click Finish.
PHP
Download from http://www.php.net/downloads.php
Steps to install PHP:
1. Open the folder C:\php and copy "php4ts.dll" to C:\Windows\system32\ (note: the PHP 5 package downloaded above ships "php5ts.dll" instead).
2. In C:\php, copy "php.ini-dist" to the directory C:\Windows and rename the file to "php.ini".
3. Configure the following settings in php.ini:
doc_root = "C:\Apache\htdocs"
extension_dir = "C:\php\extensions"
MySQL
Download from http://www.mysql.com/downloads/
Steps to install MySQL:
1. Double-click the SETUP file, choose the Typical installation, and select drive C:.
2. When the installation finishes, MySQL will be in C:\mysql.
phpMyAdmin
Download from http://www.phpmyadmin.net/home_page/index.php
phpMyAdmin is a tool that helps you administer and manage MySQL databases through the web.
Steps to install phpMyAdmin:
Extract it into the folder C:\Apache\htdocs.
Recovering a forgotten root password
For LILO
1. Restart the machine, press Alt-X while at the LILO prompt, then type linux single.
2. You will come up with root privileges; run the command passwd root, enter a new password, then reboot. Done.
For GRUB
1. Restart the machine; while at the GRUB prompt, press e and add single to the end of the boot command line.
2. You will come up with root privileges; run the command passwd root, enter a new password, then reboot. Done.
Accessing Windows partitions from Linux
Linux communicates with devices entirely through files, called device files, and each device has its own name. For hard disks, the device file name identifies the disk type, the disk's position, and the partition number. For example, the first partition of the first IDE hard disk has the device file /dev/hda1; likewise, a SCSI hard disk would be /dev/sda1, and so on. If Windows is installed on device /dev/hda1, the device must be mounted into the system before it can be accessed. First, create the target directory to mount the device on, using the mkdir command:
# mkdir /mnt/vfat
Then mount the Windows partition into the system with the mount command:
# mount -t vfat /dev/hda1 /mnt/vfat
The mount command is like grafting a branch onto a tree: once it takes, water and nutrients can flow through to every part of the branch. In the same way, once the Windows partition is mounted into the Linux file system under the directory /mnt/vfat, running ls /mnt/vfat shows the Windows directories and files, and you can then manage the files on Windows as if they were files and directories on the same system. If you want the Windows partition mounted into the system every time the machine boots, edit the file /etc/fstab with an editor:
# pico /etc/fstab
Add the following line:
/dev/hda1 /mnt/vfat vfat noauto,owner,users 0 0
Then press [Ctrl]-[x] and answer y to save the file.
Setting up squid.conf on Linux
# WELCOME TO SQUID 2
# ------------------
#
# This is the default Squid configuration file. You may wish
# to look at the Squid home page (http://www.squid-cache.org/)
# for the FAQ and other documentation.
#
# The default Squid config file shows what the defaults for
# various options happen to be. If you don't need to change the
# default, you shouldn't uncomment the line. Doing so may cause
# run-time problems. In some cases "none" refers to no default
# setting at all, while in other cases it refers to a valid
# option - the comments for that keyword indicate if this is the
# case.
#
# NETWORK OPTIONS
# -----------------------------------------------------------------------------
# TAG: http_port
# Usage: port
# hostname:port
# 1.2.3.4:port
#
# The socket addresses where Squid will listen for HTTP client
# requests. You may specify multiple socket addresses.
# There are three forms: port alone, hostname with port, and
# IP address with port. If you specify a hostname or IP
# address, then Squid binds the socket to that specific
# address. This replaces the old 'tcp_incoming_address'
# option. Most likely, you do not need to bind to a specific
# address, so you can use the port number alone.
#
# The default port number is 3128.
#
# If you are running Squid in accelerator mode, then you
# probably want to listen on port 80 also, or instead.
#
# The -a command line option will override the *first* port
# number listed here. That option will NOT override an IP
# address, however.
#
# You may specify multiple socket addresses on multiple lines.
#
#Default:
# http_port 3128
http_port 8080
# TAG: icp_port
# The port number where Squid sends and receives ICP queries to
# and from neighbor caches. Default is 3130. To disable use
# "0". May be overridden with -u on the command line.
#
#Default:
# icp_port 3130
icp_port 3130
# TAG: htcp_port
# Note: This option is only available if Squid is rebuilt with the
# --enable-htcp option
#
# The port number where Squid sends and receives HTCP queries to
# and from neighbor caches. Default is 4827. To disable use
# "0".
#
# To enable this option, you must use --enable-htcp with the
# configure script.
#
#Default:
# htcp_port 4827
# TAG: mcast_groups
# This tag specifies a list of multicast groups which your server
# should join to receive multicasted ICP queries.
#
# NOTE! Be very careful what you put here! Be sure you
# understand the difference between an ICP _query_ and an ICP
# _reply_. This option is to be set only if you want to RECEIVE
# multicast queries. Do NOT set this option to SEND multicast
# ICP (use cache_peer for that). ICP replies are always sent via
# unicast, so this option does not affect whether or not you will
# receive replies from multicast group members.
#
# You must be very careful to NOT use a multicast address which
# is already in use by another group of caches.
#
# If you are unsure about multicast, please read the Multicast
# chapter in the Squid FAQ (http://www.squid-cache.org/FAQ/).
#
# Usage: mcast_groups 239.128.16.128 224.0.1.20
#
# By default, Squid doesn't listen on any multicast groups.
#
#Default:
# none
# TAG: tcp_outgoing_address
# TAG: udp_incoming_address
# TAG: udp_outgoing_address
# Usage: tcp_incoming_address 10.20.30.40
# udp_outgoing_address fully.qualified.domain.name
#
# tcp_outgoing_address is used for connections made to remote
# servers and other caches.
# udp_incoming_address is used for the ICP socket receiving packets
# from other caches.
# udp_outgoing_address is used for ICP packets sent out to other
# caches.
#
# The default behavior is to not bind to any specific address.
#
# A *_incoming_address value of 0.0.0.0 indicates that Squid should
# listen on all available interfaces.
#
# If udp_outgoing_address is set to 255.255.255.255 (the default)
# then it will use the same socket as udp_incoming_address. Only
# change this if you want to have ICP queries sent using another
# address than where this Squid listens for ICP queries from other
# caches.
#
# NOTE, udp_incoming_address and udp_outgoing_address can not
# have the same value since they both use port 3130.
#
# NOTE, tcp_incoming_address has been removed. You can now
# specify IP addresses on the 'http_port' line.
#
#Default:
# tcp_outgoing_address 255.255.255.255
# udp_incoming_address 0.0.0.0
# udp_outgoing_address 255.255.255.255
# OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
# -----------------------------------------------------------------------------
# TAG: cache_peer
# To specify other caches in a hierarchy, use the format:
#
# cache_peer hostname type http_port icp_port
#
# For example,
#
# # proxy icp
# # hostname type port port options
# # -------------------- -------- ----- ----- -----------
# cache_peer parent.foo.net parent 3128 3130 [proxy-only]
# cache_peer sib1.foo.net sibling 3128 3130 [proxy-only]
# cache_peer sib2.foo.net sibling 3128 3130 [proxy-only]
#
# type: either 'parent', 'sibling', or 'multicast'.
#
# proxy_port: The port number where the cache listens for proxy
# requests.
#
# icp_port: Used for querying neighbor caches about
# objects. To have a non-ICP neighbor
# specify '7' for the ICP port and make sure the
# neighbor machine has the UDP echo port
# enabled in its /etc/inetd.conf file.
#
# options: proxy-only
# weight=n
# ttl=n
# no-query
# default
# round-robin
# multicast-responder
# closest-only
# no-digest
# no-netdb-exchange
# no-delay
# login=user:password
# connect-timeout=nn
# digest-url=url
# allow-miss
#
# use 'proxy-only' to specify that objects fetched
# from this cache should not be saved locally.
#
# use 'weight=n' to specify a weighted parent.
# The weight must be an integer. The default weight
# is 1, larger weights are favored more.
#
# use 'ttl=n' to specify a IP multicast TTL to use
# when sending an ICP queries to this address.
# Only useful when sending to a multicast group.
# Because we don't accept ICP replies from random
# hosts, you must configure other group members as
# peers with the 'multicast-responder' option below.
#
# use 'no-query' to NOT send ICP queries to this
# neighbor.
#
# use 'default' if this is a parent cache which can
# be used as a "last-resort." You should probably
# only use 'default' in situations where you cannot
# use ICP with your parent cache(s).
#
# use 'round-robin' to define a set of parents which
# should be used in a round-robin fashion in the
# absence of any ICP queries.
#
# 'multicast-responder' indicates that the named peer
# is a member of a multicast group. ICP queries will
# not be sent directly to the peer, but ICP replies
# will be accepted from it.
#
# 'closest-only' indicates that, for ICP_OP_MISS
# replies, we'll only forward CLOSEST_PARENT_MISSes
# and never FIRST_PARENT_MISSes.
#
# use 'no-digest' to NOT request cache digests from
# this neighbor.
#
# 'no-netdb-exchange' disables requesting ICMP
# RTT database (NetDB) from the neighbor.
#
# use 'no-delay' to prevent access to this neighbor
# from influencing the delay pools.
#
# use 'login=user:password' if this is a personal/workgroup
# proxy and your parent requires proxy authentication.
#
# use 'connect-timeout=nn' to specify a peer
# specific connect timeout (also see the
# peer_connect_timeout directive)
#
# use 'digest-url=url' to tell Squid to fetch the cache
# digest (if digests are enabled) for this host from
# the specified URL rather than the Squid default
# location.
#
# use 'allow-miss' to disable Squid's use of only-if-cached
# when forwarding requests to siblings. This is primarily
# useful when icp_hit_stale is used by the sibling. Too
# extensive use of this option may result in forwarding
# loops, and you should avoid having two-way peerings
# with this option. (for example to deny peer usage on
# requests from peer by denying cache_peer_access if the
# source is a peer)
#
# NOTE: non-ICP neighbors must be specified as 'parent'.
#
#Default:
# none
# TAG: cache_peer_domain
# Use to limit the domains for which a neighbor cache will be
# queried. Usage:
#
# cache_peer_domain cache-host domain [domain ...]
# cache_peer_domain cache-host !domain
#
# For example, specifying
#
# cache_peer_domain parent.foo.net .edu
#
# has the effect such that UDP query packets are sent to
# 'bigserver' only when the requested object exists on a
# server in the .edu domain. Prefixing the domainname
# with '!' means that the cache will be queried for objects
# NOT in that domain.
#
# NOTE: * Any number of domains may be given for a cache-host,
# either on the same or separate lines.
# * When multiple domains are given for a particular
# cache-host, the first matched domain is applied.
# * Cache hosts with no domain restrictions are queried
# for all requests.
# * There are no defaults.
# * There is also a 'cache_peer_access' tag in the ACL
# section.
#
#Default:
# none
# TAG: neighbor_type_domain
# usage: neighbor_type_domain parent|sibling domain domain ...
#
# Modifying the neighbor type for specific domains is now
# possible. You can treat some domains differently than the
# default neighbor type specified on the 'cache_peer' line.
# Normally it should only be necessary to list domains which
# should be treated differently because the default neighbor type
# applies for hostnames which do not match domains listed here.
#
#EXAMPLE:
# cache_peer cache.foo.org parent 3128 3130
# neighbor_type_domain cache.foo.org sibling .com .net
# neighbor_type_domain cache.foo.org sibling .au .de
#
#Default:
# none
# TAG: icp_query_timeout (msec)
# Normally Squid will automatically determine an optimal ICP
# query timeout value based on the round-trip-time of recent ICP
# queries. If you want to override the value determined by
# Squid, set this 'icp_query_timeout' to a non-zero value. This
# value is specified in MILLISECONDS, so, to use a 2-second
# timeout (the old default), you would write:
#
# icp_query_timeout 2000
#
#Default:
# icp_query_timeout 0
# TAG: maximum_icp_query_timeout (msec)
# Normally the ICP query timeout is determined dynamically. But
# sometimes it can lead to very large values (say 5 seconds).
# Use this option to put an upper limit on the dynamic timeout
# value. Do NOT use this option to always use a fixed (instead
# of a dynamic) timeout value. To set a fixed timeout see the
# 'icp_query_timeout' directive.
#
#Default:
# maximum_icp_query_timeout 2000
# TAG: mcast_icp_query_timeout (msec)
# For Multicast peers, Squid regularly sends out ICP "probes" to
# count how many other peers are listening on the given multicast
# address. This value specifies how long Squid should wait to
# count all the replies. The default is 2000 msec, or 2
# seconds.
#
#Default:
# mcast_icp_query_timeout 2000
# TAG: dead_peer_timeout (seconds)
# This controls how long Squid waits to declare a peer cache
# as "dead." If there are no ICP replies received in this
# amount of time, Squid will declare the peer dead and not
# expect to receive any further ICP replies. However, it
# continues to send ICP queries, and will mark the peer as
# alive upon receipt of the first subsequent ICP reply.
#
# This timeout also affects when Squid expects to receive ICP
# replies from peers. If more than 'dead_peer' seconds have
# passed since the last ICP reply was received, Squid will not
# expect to receive an ICP reply on the next query. Thus, if
# your time between requests is greater than this timeout, you
# will see a lot of requests sent DIRECT to origin servers
# instead of to your parents.
#
#Default:
# dead_peer_timeout 10 seconds
# TAG: hierarchy_stoplist
# A list of words which, if found in a URL, cause the object to
# be handled directly by this cache. In other words, use this
# to not query neighbor caches for certain objects. You may
# list this option multiple times.
#
#We recommend you use at least the following line.
hierarchy_stoplist cgi-bin ?
# TAG: no_cache
# A list of ACL elements which, if matched, cause the reply to
# be immediately removed from the cache. In other words, use this
# to force certain objects to never be cached.
#
# You must use the word 'DENY' to indicate the ACL names which should
# NOT be cached.
#
#We recommend you use the following two lines.
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
# OPTIONS WHICH AFFECT THE CACHE SIZE
# -----------------------------------------------------------------------------
# TAG: cache_mem (bytes)
# NOTE: THIS PARAMETER DOES NOT SPECIFY THE MAXIMUM PROCESS
# SIZE. IT PLACES A LIMIT ON ONE ASPECT OF SQUID'S MEMORY
# USAGE. SQUID USES MEMORY FOR OTHER THINGS AS WELL.
# YOUR PROCESS WILL PROBABLY BECOME TWICE OR THREE TIMES
# BIGGER THAN THE VALUE YOU PUT HERE.
#
# 'cache_mem' specifies the ideal amount of memory to be used
# for:
# * In-Transit objects
# * Hot Objects
# * Negative-Cached objects
#
# Data for these objects are stored in 4 KB blocks. This
# parameter specifies the ideal upper limit on the total size of
# 4 KB blocks allocated. In-Transit objects take the highest
# priority.
#
# In-transit objects have priority over the others. When
# additional space is needed for incoming data, negative-cached
# and hot objects will be released. In other words, the
# negative-cached and hot objects will fill up any unused space
# not needed for in-transit objects.
#
# If circumstances require, this limit will be exceeded.
# Specifically, if your incoming request rate requires more than
# 'cache_mem' of memory to hold in-transit objects, Squid will
# exceed this limit to satisfy the new requests. When the load
# decreases, blocks will be freed until the high-water mark is
# reached. Thereafter, blocks will be used to store hot
# objects.
#
#Default:
# cache_mem 8 MB
cache_mem 128 MB
# TAG: cache_swap_low (percent, 0-100)
# TAG: cache_swap_high (percent, 0-100)
#
# The low- and high-water marks for cache object replacement.
# Replacement begins when the swap (disk) usage is above the
# low-water mark and attempts to maintain utilization near the
# low-water mark. As swap utilization gets close to high-water
# mark object eviction becomes more aggressive. If utilization is
# close to the low-water mark less replacement is done each time.
#
# Defaults are 90% and 95%. If you have a large cache, 5% could be
# hundreds of MB. If this is the case you may wish to set these
# numbers closer together.
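#
# For illustration only (not a tuned recommendation), a large
# cache might narrow the gap between the two marks like this:
#
# cache_swap_low 93
# cache_swap_high 95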
#
#Default:
# cache_swap_low 90
# cache_swap_high 95
# TAG: maximum_object_size (bytes)
# Objects larger than this size will NOT be saved on disk. The
# value is specified in kilobytes, and the default is 4MB. If
# you wish to get a high BYTES hit ratio, you should probably
# increase this (one 32 MB object hit counts for 3200 10KB
# hits). If you wish to increase speed more than you want to
# save bandwidth you should leave this low.
#
# NOTE: if using the LFUDA replacement policy you should increase
# this value to maximize the byte hit rate improvement of LFUDA!
# See replacement_policy below for a discussion of this policy.
#
#Default:
# maximum_object_size 4096 KB
# TAG: minimum_object_size (bytes)
# Objects smaller than this size will NOT be saved on disk. The
# value is specified in kilobytes, and the default is 0 KB, which
# means there is no minimum.
#
#Default:
# minimum_object_size 0 KB
# TAG: maximum_object_size_in_memory (bytes)
# Objects greater than this size will not be kept in the
# memory cache. This should be set high enough to keep frequently
# accessed objects in memory to improve performance, whilst low
# enough to keep larger objects from hoarding cache_mem.
#
#Default:
# maximum_object_size_in_memory 8 KB
# TAG: ipcache_size (number of entries)
# TAG: ipcache_low (percent)
# TAG: ipcache_high (percent)
# The size, low-, and high-water marks for the IP cache.
#
#Default:
# ipcache_size 1024
# ipcache_low 90
# ipcache_high 95
# TAG: fqdncache_size (number of entries)
# Maximum number of FQDN cache entries.
#
#Default:
# fqdncache_size 1024
# TAG: cache_replacement_policy
# The cache replacement policy parameter determines which
# objects are evicted (replaced) when disk space is needed.
#
# lru : Squid's original list based LRU policy
# heap GDSF : Greedy-Dual Size Frequency
# heap LFUDA: Least Frequently Used with Dynamic Aging
# heap LRU : LRU policy implemented using a heap
#
# Applies to any cache_dir lines listed below this.
#
# The LRU policies keep recently referenced objects.
#
# The heap GDSF policy optimizes object hit rate by keeping smaller
# popular objects in cache so it has a better chance of getting a
# hit. It achieves a lower byte hit rate than LFUDA though since
# it evicts larger (possibly popular) objects.
#
# The heap LFUDA policy keeps popular objects in cache regardless of
# their size and thus optimizes byte hit rate at the expense of
# hit rate since one large, popular object will prevent many
# smaller, slightly less popular objects from being cached.
#
# Both policies utilize a dynamic aging mechanism that prevents
# cache pollution that can otherwise occur with frequency-based
# replacement policies.
#
# NOTE: if using the LFUDA replacement policy you should increase
# the value of maximum_object_size above its default of 4096 KB
# to maximize the potential byte hit rate improvement of LFUDA.
#
# For more information about the GDSF and LFUDA cache replacement
# policies see http://www.hpl.hp.com/techreports/1999/HPL-1999-69.html
# and http://fog.hpl.external.hp.com/techreports/98/HPL-98-173.html.
#
#Default:
# cache_replacement_policy lru
cache_replacement_policy heap GDSF
# TAG: memory_replacement_policy
# The memory replacement policy parameter determines which
# objects are purged from memory when memory space is needed.
#
# See cache_replacement_policy for details.
#
#Default:
# memory_replacement_policy lru
memory_replacement_policy heap GDSF
# LOGFILE PATHNAMES AND CACHE DIRECTORIES
# -----------------------------------------------------------------------------
# TAG: cache_dir
# Usage:
#
# cache_dir Type Directory-Name Fs-specific-data [options]
#
# You can specify multiple cache_dir lines to spread the
# cache among different disk partitions.
#
# Type specifies the kind of storage system to use. Most
# everyone will want to use "ufs" as the type. If you are using
# Async I/O (--enable-async-io) on Linux or Solaris, then you may
# want to try "aufs" as the type. Async IO support may be
# buggy, however, so beware.
#
# 'Directory' is a top-level directory where cache swap
# files will be stored. If you want to use an entire disk
# for caching, then this can be the mount-point directory.
# The directory must exist and be writable by the Squid
# process. Squid will NOT create this directory for you.
#
# The ufs store type:
#
# "ufs" is the old well-known Squid storage format that has always
# been there.
#
# cache_dir ufs Directory-Name Mbytes L1 L2 [options]
#
# 'Mbytes' is the amount of disk space (MB) to use under this
# directory. The default is 100 MB. Change this to suit your
# configuration.
#
# 'Level-1' is the number of first-level subdirectories which
# will be created under the 'Directory'. The default is 16.
#
# 'Level-2' is the number of second-level subdirectories which
# will be created under each first-level directory. The default
# is 256.
#
# The aufs store type:
#
# "aufs" uses the same storage format as "ufs", utilizing
# POSIX-threads to avoid blocking the main Squid process on
# disk-I/O. This was formerly known in Squid as async-io.
#
# cache_dir aufs Directory-Name Mbytes L1 L2 [options]
#
# see argument descriptions under ufs above
#
# The diskd store type:
#
# "diskd" uses the same storage format as "ufs", utilizing a
# separate process to avoid blocking the main Squid process on
# disk-I/O.
#
# cache_dir diskd Directory-Name Mbytes L1 L2 [options] [Q1=n] [Q2=n]
#
# see argument descriptions under ufs above
#
# Q1 specifies the number of unacknowledged I/O requests when Squid
# stops opening new files. If this many messages are in the queues,
# Squid won't open new files. Default is 64
#
# Q2 specifies the number of unacknowledged messages when Squid
# starts blocking. If this many messages are in the queues,
# Squid blocks until it receives some replies. Default is 72
#
# Common options:
#
# read-only, this cache_dir is read only.
#
# max-size=n refers to the max object size this storedir supports.
# It is used to initially choose the storedir to dump the object.
# Note: To make optimal use of the max-size limits you should order
# the cache_dir lines with the smallest max-size value first and the
# ones with no max-size specification last.
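#
# For illustration (hypothetical paths and sizes), ordering
# cache_dir lines by max-size as described above might look like:
#
# cache_dir ufs /cache_small 1000 16 256 max-size=65536
# cache_dir ufs /cache_big 5000 16 256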
#
#Default:
# cache_dir ufs /var/spool/squid 100 16 256
cache_dir ufs /cache 1000 16 256
# TAG: cache_access_log
# Logs the client request activity. Contains an entry for
# every HTTP and ICP query received.
#
#Default:
# cache_access_log /var/log/squid/access.log
# TAG: cache_log
# Cache logging file. This is where general information about
# your cache's behavior goes. You can increase the amount of data
# logged to this file with the "debug_options" tag below.
#
#Default:
# cache_log /var/log/squid/cache.log
# TAG: cache_store_log
# Logs the activities of the storage manager. Shows which
# objects are ejected from the cache, and which objects are
# saved and for how long. To disable, enter "none". There are
# not really any utilities to analyze this data, so you can safely
# disable it.
#
#Default:
# cache_store_log /var/log/squid/store.log
# TAG: cache_swap_log
# Location for the cache "swap.log." This log file holds the
# metadata of objects saved on disk. It is used to rebuild the
# cache during startup. Normally this file resides in each
# 'cache_dir' directory, but you may specify an alternate
# pathname here. Note you must give a full filename, not just
# a directory. Since this is the index for the whole object
# list you CANNOT periodically rotate it!
#
# If %s can be used in the file name then it will be replaced with
# a representation of the cache_dir name where each / is replaced
# with '.'. This is needed to allow adding/removing cache_dir
# lines when cache_swap_log is being used.
#
# If you have more than one 'cache_dir', and %s is not used in the name
# then these swap logs will have names such as:
#
# cache_swap_log.00
# cache_swap_log.01
# cache_swap_log.02
#
# The numbered extension (which is added automatically)
# corresponds to the order of the 'cache_dir' lines in this
# configuration file. If you change the order of the 'cache_dir'
# lines in this file, then these log files will NOT correspond to
# the correct 'cache_dir' entry (unless you manually rename
# them). We recommend that you do NOT use this option. It is
# better to keep these log files in each 'cache_dir' directory.
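#
# For illustration (hypothetical path): with cache_dir
# directories /cache1 and /cache2, the line
#
# cache_swap_log /var/log/squid/swap.%s
#
# would produce /var/log/squid/swap..cache1 and
# /var/log/squid/swap..cache2, since each / in the cache_dir
# name is replaced with '.'.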
#
#Default:
# none
# TAG: emulate_httpd_log on|off
# The Cache can emulate the log file format which many 'httpd'
# programs use. To disable/enable this emulation, set
# emulate_httpd_log to 'off' or 'on'. The default
# is to use the native log format since it includes useful
# information that Squid-specific log analyzers use.
#
#Default:
# emulate_httpd_log off
# TAG: log_ip_on_direct on|off
# Log the destination IP address in the hierarchy log tag when going
# direct. Earlier Squid versions logged the hostname here. If you
# prefer the old way set this to off.
#
#Default:
# log_ip_on_direct on
# TAG: mime_table
# Pathname to Squid's MIME table. You shouldn't need to change
# this, but the default file contains examples and formatting
# information if you do.
#
#Default:
# mime_table /etc/squid/mime.conf
# TAG: log_mime_hdrs on|off
# The Cache can record both the request and the response MIME
# headers for each HTTP transaction. The headers are encoded
# safely and will appear as two bracketed fields at the end of
# the access log (for either the native or httpd-emulated log
# formats). To enable this logging set log_mime_hdrs to 'on'.
#
#Default:
# log_mime_hdrs off
# TAG: useragent_log
# Note: This option is only available if Squid is rebuilt with the
# --enable-useragent-log option
#
# Squid will write the User-Agent field from HTTP requests
# to the filename specified here. By default useragent_log
# is disabled.
#
#Default:
# none
# TAG: referer_log
# Note: This option is only available if Squid is rebuilt with the
# --enable-referer-log option
#
# Squid will write the Referer field from HTTP requests to the
# filename specified here. By default referer_log is disabled.
#
#Default:
# none
# TAG: pid_filename
# A filename to write the process-id to. To disable, enter "none".
#
#Default:
# pid_filename /var/run/squid.pid
# TAG: debug_options
# Logging options are set as section,level where each source file
# is assigned a unique section. Lower levels result in less
# output. Full debugging (level 9) can result in a very large
# log file, so be careful. The magic word "ALL" sets debugging
# levels for all sections. We recommend normally running with
# "ALL,1".
#
#Default:
# debug_options ALL,1
# TAG: log_fqdn on|off
# Turn this on if you wish to log fully qualified domain names
# in the access.log. To do this Squid does a DNS lookup of all
# IP's connecting to it. This can (in some situations) increase
# latency, which makes your cache seem slower for interactive
# browsing.
#
#Default:
# log_fqdn off
# TAG: client_netmask
# A netmask for client addresses in logfiles and cachemgr output.
# Change this to protect the privacy of your cache clients.
# A netmask of 255.255.255.0 will log all IP's in that range with
# the last digit set to '0'.
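#
# For example, with
#
# client_netmask 255.255.255.0
#
# a client at 192.168.1.57 (hypothetical address) would appear
# in the logs as 192.168.1.0.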
#
#Default:
# client_netmask 255.255.255.255
# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
# -----------------------------------------------------------------------------
# TAG: ftp_user
# If you want the anonymous login password to be more informative
# (and enable the use of picky ftp servers), set this to something
# reasonable for your domain, like wwwuser@somewhere.net
#
# The reason why this is domainless by default is that the
# request can be made on the behalf of a user in any domain,
# depending on how the cache is used.
# Some FTP servers also validate that the email address is valid
# (for example perl.com).
#
#Default:
# ftp_user Squid@
# TAG: ftp_list_width
# Sets the width of ftp listings. This should be set to fit in
# the width of a standard browser. Setting this too small
# can cut off long filenames when browsing ftp sites.
#
#Default:
# ftp_list_width 32
# TAG: ftp_passive
# If your firewall does not allow Squid to use passive
# connections, then turn off this option.
#
#Default:
# ftp_passive on
# TAG: cache_dns_program
# Note: This option is only available if Squid is rebuilt with the
# --disable-internal-dns option
#
# Specify the location of the executable for dnslookup process.
#
#Default:
# cache_dns_program /usr/lib/squid/
# TAG: dns_children
# Note: This option is only available if Squid is rebuilt with the
# --disable-internal-dns option
#
# The number of processes spawned to service DNS name lookups.
# For heavily loaded caches on large servers, you should
# probably increase this value to at least 10. The maximum
# is 32. The default is 5.
#
# You must have at least one dnsserver process.
#
#Default:
# dns_children 5
# TAG: dns_retransmit_interval
# Initial retransmit interval for DNS queries. The interval is
# doubled each time all configured DNS servers have been tried.
#
#
#Default:
# dns_retransmit_interval 5 seconds
# TAG: dns_timeout
# DNS Query timeout. If no response is received to a DNS query
# within this time then all DNS servers for the queried domain
# are assumed to be unavailable.
#
#Default:
# dns_timeout 5 minutes
# TAG: dns_defnames on|off
# Note: This option is only available if Squid is rebuilt with the
# --disable-internal-dns option
#
# Normally the 'dnsserver' disables the RES_DEFNAMES resolver
# option (see res_init(3)). This prevents caches in a hierarchy
# from interpreting single-component hostnames locally. To allow
# dnsserver to handle single-component names, enable this
# option.
#
#Default:
# dns_defnames off
# TAG: dns_nameservers
# Use this if you want to specify a list of DNS name servers
# (IP addresses) to use instead of those given in your
# /etc/resolv.conf file.
#
# Example: dns_nameservers 10.0.0.1 192.172.0.4
#
#Default:
# none
# TAG: diskd_program
# Specify the location of the diskd executable.
# Note that this is only useful if you have compiled in
# diskd as one of the store io modules.
#
#Default:
# diskd_program /usr/lib/squid/diskd
# TAG: unlinkd_program
# Specify the location of the executable for file deletion process.
#
#Default:
# unlinkd_program /usr/lib/squid/unlinkd
# TAG: pinger_program
# Note: This option is only available if Squid is rebuilt with the
# --enable-icmp option
#
# Specify the location of the executable for the pinger process.
# This is only useful if you configured Squid (during compilation)
# with the '--enable-icmp' option.
#
#Default:
# pinger_program /usr/lib/squid/
# TAG: redirect_program
# Specify the location of the executable for the URL redirector.
# Since they can perform almost any function there isn't one included.
# See the Release-Notes for information on how to write one.
# By default, a redirector is not used.
#
#Default:
# none
# TAG: redirect_children
# The number of redirector processes to spawn. If you start
# too few Squid will have to wait for them to process a backlog of
# URLs, slowing it down. If you start too many they will use RAM
# and other system resources.
#
#Default:
# redirect_children 5
# TAG: redirect_rewrites_host_header
# By default Squid rewrites any Host: header in redirected
# requests. If you are running an accelerator then this may
# not be a desired effect of a redirector.
#
#Default:
# redirect_rewrites_host_header on
# TAG: redirector_access
# If defined, this access list specifies which requests are
# sent to the redirector processes. By default all requests
# are sent.
#
#Default:
# none
# TAG: authenticate_program
# Specify the command for the external authenticator. Such a
# program reads a line containing "username password" and replies
# "OK" or "ERR" in an endless loop. If you use an authenticator,
# make sure you have 1 acl of type proxy_auth. By default, the
# authenticator_program is not used.
#
# If you want to use the traditional proxy authentication,
# jump over to the ../auth_modules/NCSA directory and
# type:
# % make
# % make install
#
# Then, set this line to something like
#
# authenticate_program /usr/bin/ncsa_auth /usr/etc/passwd
#
#Default:
# none
# TAG: authenticate_children
# The number of authenticator processes to spawn (default 5). If you
# start too few Squid will have to wait for them to process a backlog
# of usercode/password verifications, slowing it down. When password
# verifications are done via a (slow) network you are likely to need
# lots of authenticator processes.
#
#Default:
# authenticate_children 5
# TAG: authenticate_ttl
# The time a checked username/password combination remains cached.
# If a wrong password is given for a cached user, the user gets
# removed from the username/password cache forcing a revalidation.
#
#Default:
# authenticate_ttl 1 hour
# TAG: authenticate_ip_ttl
# With this option you control how long a proxy authentication
# will be bound to a specific IP address. If a request using
# the same user name is received during this time then access
# will be denied and both users are required to reauthenticate
# themselves. The idea behind this is to make it annoying
# for people to share their password to their friends, but
# yet allow a dialup user to reconnect on a different dialup
# port.
#
# The default is 0 to disable the check. The recommended value
# if you have dialup users is no more than 60 seconds, to allow
# the user to redial without hassle. If all your users are
# stationary then higher values may be used.
#
# See also authenticate_ip_ttl_is_strict
#
#Default:
# authenticate_ip_ttl 0 seconds
# TAG: authenticate_ip_ttl_is_strict
# This option makes authenticate_ip_ttl a bit stricter. With this
# enabled authenticate_ip_ttl will deny all access from other IP
# addresses until the TTL has expired, and the IP address "owning"
# the userid will not be forced to reauthenticate.
#
#Default:
# authenticate_ip_ttl_is_strict on
# OPTIONS FOR TUNING THE CACHE
# -----------------------------------------------------------------------------
# TAG: wais_relay_host
# TAG: wais_relay_port
# Relay WAIS requests to host (1st arg) at port (2nd arg).
#
#Default:
# wais_relay_port 0
# TAG: request_header_max_size (KB)
# This specifies the maximum size for HTTP headers in a request.
# Request headers are usually relatively small (about 512 bytes).
# Placing a limit on the request header size will catch certain
# bugs (for example with persistent connections) and possibly
# buffer-overflow or denial-of-service attacks.
#
#Default:
# request_header_max_size 10 KB
# TAG: request_body_max_size (KB)
# This specifies the maximum size for an HTTP request body.
# In other words, the maximum size of a PUT/POST request.
# A user who attempts to send a request with a body larger
# than this limit receives an "Invalid Request" error message.
# If you set this parameter to zero, there will be no limit
# imposed.
#
#Default:
# request_body_max_size 1 MB
# TAG: reply_body_max_size (KB)
# This option specifies the maximum size of a reply body. It
# can be used to prevent users from downloading very large files,
# such as MP3's and movies. The reply size is checked twice.
# First when we get the reply headers, we check the
# content-length value. If the content length value exists and
# is larger than this parameter, the request is denied and the
# user receives an error message that says "the request or reply
# is too large." If there is no content-length, and the reply
# size exceeds this limit, the client's connection is just closed
# and they will receive a partial reply.
#
# NOTE: downstream caches probably can not detect a partial reply
# if there is no content-length header, so they will cache
# partial responses and give them out as hits. You should NOT
# use this option if you have downstream caches.
#
# If you set this parameter to zero (the default), there will be
# no limit imposed.
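#
# For example, to cap reply bodies at roughly 10 MB (an
# illustrative policy; the value is in KB):
#
# reply_body_max_size 10240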
#
#Default:
# reply_body_max_size 0
# TAG: refresh_pattern
# usage: refresh_pattern [-i] regex min percent max [options]
#
# By default, regular expressions are CASE-SENSITIVE. To make
# them case-insensitive, use the -i option.
#
# 'Min' is the time (in minutes) an object without an explicit
# expiry time should be considered fresh. The recommended
# value is 0, any higher values may cause dynamic applications
# to be erroneously cached unless the application designer
# has taken the appropriate actions.
#
# 'Percent' is a percentage of the object's age (time since last
# modification) for which an object without an explicit expiry
# time will be considered fresh.
#
# 'Max' is an upper limit on how long objects without an explicit
# expiry time will be considered fresh.
#
# options: override-expire
# override-lastmod
# reload-into-ims
# ignore-reload
#
# override-expire enforces min age even if the server
# sent an Expires: header. Doing this VIOLATES the HTTP
# standard. Enabling this feature could make you liable
# for problems which it causes.
#
# override-lastmod enforces min age even on objects
# that were modified recently.
#
# reload-into-ims changes client no-cache or ``reload''
# to If-Modified-Since requests. Doing this VIOLATES the
# HTTP standard. Enabling this feature could make you
# liable for problems which it causes.
#
# ignore-reload ignores a client no-cache or ``reload''
# header. Doing this VIOLATES the HTTP standard. Enabling
# this feature could make you liable for problems which
# it causes.
#
# Please see the file doc/Release-Notes-1.1.txt for a full
# description of Squid's refresh algorithm. Basically a
# cached object is: (the order is changed from 1.1.X)
#
# FRESH if expires > now, else STALE
# STALE if age > max
# FRESH if lm-factor < percent, else STALE
# FRESH if age < min
# else STALE
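#
# Worked example (hypothetical object): with the default rule
# "refresh_pattern . 0 20% 4320", an object with no Expires
# header that was last modified 10 hours before Squid fetched
# it, and that was fetched 1 hour ago, has age 1h and lm-factor
# 1h/10h = 10%. Since 10% < 20%, the object is FRESH.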
#
# The refresh_pattern lines are checked in the order listed here.
# The first entry which matches is used. If none of the entries
# match, then the default will be used.
#
# Note, you must uncomment all the default lines if you want
# to change one. The default setting is only active if none is
# used.
#
#Default:
# refresh_pattern ^ftp: 1440 20% 10080
# refresh_pattern ^gopher: 1440 0% 1440
# refresh_pattern . 0 20% 4320
# TAG: reference_age
# As a part of normal operation, Squid performs Least Recently
# Used removal of cached objects. The LRU age for removal is
# computed dynamically, based on the amount of disk space in
# use. The dynamic value can be seen in the Cache Manager 'info'
# output.
#
# The 'reference_age' parameter defines the maximum LRU age. For
# example, setting reference_age to '1 week' will cause objects
# to be removed if they have not been accessed for a week or
# more. The default value is one year.
#
# Specify a number here, followed by units of time. For example:
# 1 week
# 3.5 days
# 4 months
# 2.2 hours
#
# NOTE: this parameter is not used when using the enhanced
# replacement policies, GDSF or LFUDA.
#
#Default:
# reference_age 1 year
# TAG: quick_abort_min (KB)
# TAG: quick_abort_max (KB)
# TAG: quick_abort_pct (percent)
# The cache can be configured to continue downloading aborted
# requests. This may be undesirable on slow (e.g. SLIP) links
# and/or very busy caches. Impatient users may tie up file
# descriptors and bandwidth by repeatedly requesting and
# immediately aborting downloads.
#
# When the user aborts a request, Squid will check the
# quick_abort values against the amount of data transferred so
# far.
#
# If the transfer has less than 'quick_abort_min' KB remaining,
# it will finish the retrieval. Setting 'quick_abort_min' to -1
# will disable the quick_abort feature.
#
# If the transfer has more than 'quick_abort_max' KB remaining,
# it will abort the retrieval.
#
# If more than 'quick_abort_pct' of the transfer has completed,
# it will finish the retrieval.
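#
# Worked example with the defaults below (min 16 KB, max 16 KB,
# pct 95): if a user aborts a 1000 KB download after 990 KB,
# only 10 KB remain (less than quick_abort_min), so Squid
# finishes the retrieval; aborting after 100 KB leaves 900 KB
# (more than quick_abort_max), so Squid aborts it too.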
#
#Default:
# quick_abort_min 16 KB
# quick_abort_max 16 KB
# quick_abort_pct 95
# TAG: negative_ttl time-units
# Time-to-Live (TTL) for failed requests. Certain types of
# failures (such as "connection refused" and "404 Not Found") are
# negatively-cached for a configurable amount of time. The
# default is 5 minutes. Note that this is different from
# negative caching of DNS lookups.
#
#Default:
# negative_ttl 5 minutes
# TAG: positive_dns_ttl time-units
# Time-to-Live (TTL) for positive caching of successful DNS lookups.
# Default is 6 hours (360 minutes). If you want to minimize the
# use of Squid's ipcache, set this to 1, not 0.
#
#Default:
# positive_dns_ttl 6 hours
# TAG: negative_dns_ttl time-units
# Time-to-Live (TTL) for negative caching of failed DNS lookups.
#
#Default:
# negative_dns_ttl 5 minutes
# TAG: range_offset_limit (bytes)
# Sets an upper limit on how far into the file a Range request
# may be to cause Squid to prefetch the whole file. If beyond this
# limit then Squid forwards the Range request as it is and the result
# is NOT cached.
#
# This is to stop a far-ahead range request (let's say starting at 17MB)
# from making Squid fetch the whole object up to that point before
# sending anything to the client.
#
# A value of -1 causes Squid to always fetch the object from the
# beginning so that it may cache the result. (2.0 style)
#
# A value of 0 causes Squid to never fetch more than the
# client requested. (default)
#
#Default:
# range_offset_limit 0 KB
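#
# For example (an illustrative value), to prefetch whole objects
# only for ranges starting within the first 512 KB:
#
# range_offset_limit 512 KB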
# TIMEOUTS
# -----------------------------------------------------------------------------
# TAG: connect_timeout time-units
# Some systems (notably Linux) can not be relied upon to properly
# time out connect(2) requests. Therefore the Squid process
# enforces its own timeout on server connections. This parameter
# specifies how long to wait for the connect to complete. The
# default is two minutes (120 seconds).
#
#Default:
# connect_timeout 2 minutes
# TAG: peer_connect_timeout time-units
# This parameter specifies how long to wait for a pending TCP
# connection to a peer cache. The default is 30 seconds. You
# may also set different timeout values for individual neighbors
# with the 'connect-timeout' option on a 'cache_peer' line.
#
#Default:
# peer_connect_timeout 30 seconds
# TAG: siteselect_timeout time-units
# Timeout for selecting a URL when a URN resolves to multiple URLs.
#
#Default:
# siteselect_timeout 4 seconds
# TAG: read_timeout time-units
# The read_timeout is applied on server-side connections. After
# each successful read(), the timeout will be extended by this
# amount. If no data is read again after this amount of time,
# the request is aborted and logged with ERR_READ_TIMEOUT. The
# default is 15 minutes.
#
#Default:
# read_timeout 15 minutes
# TAG: request_timeout
# How long to wait for an HTTP request after connection
# establishment. For persistent connections, wait this long
# after the previous request completes.
#
#Default:
# request_timeout 30 seconds
# TAG: client_lifetime time-units
# The maximum amount of time that a client (browser) is allowed to
# remain connected to the cache process. This protects the Cache
# from having a lot of sockets (and hence file descriptors) tied up
# in a CLOSE_WAIT state from remote clients that go away without
# properly shutting down (either because of a network failure or
# because of a poor client implementation). The default is one
# day, 1440 minutes.
#
# NOTE: The default value is intended to be much larger than any
# client would ever need to be connected to your cache. You
# should probably change client_lifetime only as a last resort.
# If you seem to have many client connections tying up
# file descriptors, we recommend first tuning the read_timeout,
# request_timeout, pconn_timeout and quick_abort values.
#
#Default:
# client_lifetime 1 day
# TAG: half_closed_clients
# Some clients may shutdown the sending side of their TCP
# connections, while leaving their receiving sides open. Sometimes,
# Squid can not tell the difference between a half-closed and a
# fully-closed TCP connection. By default, half-closed client
# connections are kept open until a read(2) or write(2) on the
# socket returns an error. Change this option to 'off' and Squid
# will immediately close client connections when read(2) returns
# "no more data to read."
#
#Default:
# half_closed_clients on
# TAG: pconn_timeout
# Timeout for idle persistent connections to servers and other
# proxies.
#
#Default:
# pconn_timeout 120 seconds
# TAG: ident_timeout
# Maximum time to wait for IDENT requests. If this is too high,
# and you enabled 'ident_lookup', then you might be susceptible
# to denial-of-service by having many ident requests going at
# once.
#
# Only src type ACL checks are fully supported. A src_domain
# ACL might work at times, but it will not always provide
# the correct result.
#
# This option may be disabled by using --disable-ident with
# the configure script.
#
#Default:
# ident_timeout 10 seconds
# TAG: shutdown_lifetime time-units
# When SIGTERM or SIGHUP is received, the cache is put into
# "shutdown pending" mode until all active sockets are closed.
# This value is the lifetime to set for all open descriptors
# during shutdown mode. Any active clients after this many
# seconds will receive a 'timeout' message.
#
#Default:
# shutdown_lifetime 30 seconds
# ACCESS CONTROLS
# -----------------------------------------------------------------------------
# TAG: acl
# Defining an Access List
#
# acl aclname acltype string1 ...
# acl aclname acltype "file" ...
#
# when using "file", the file should contain one item per line
#
# acltype is one of src dst srcdomain dstdomain url_pattern
# urlpath_pattern time port proto method browser user
#
# By default, regular expressions are CASE-SENSITIVE. To make
# them case-insensitive, use the -i option.
#
# acl aclname src ip-address/netmask ... (client's IP address)
# acl aclname src addr1-addr2/netmask ... (range of addresses)
# acl aclname dst ip-address/netmask ... (URL host's IP address)
# acl aclname myip ip-address/netmask ... (local socket IP address)
#
# acl aclname srcdomain .foo.com ... # reverse lookup, client IP
# acl aclname dstdomain .foo.com ... # Destination server from URL
# acl aclname srcdom_regex [-i] xxx ... # regex matching client name
# acl aclname dstdom_regex [-i] xxx ... # regex matching server
# # For dstdomain and dstdom_regex a reverse lookup is tried if an IP
# # based URL is used. The name "none" is used if the reverse lookup
# # fails.
#
# acl aclname time [day-abbrevs] [h1:m1-h2:m2]
# day-abbrevs:
# S - Sunday
# M - Monday
# T - Tuesday
# W - Wednesday
# H - Thursday
# F - Friday
# A - Saturday
# h1:m1 must be less than h2:m2
# acl aclname url_regex [-i] ^http:// ... # regex matching on whole URL
# acl aclname urlpath_regex [-i] \.gif$ ... # regex matching on URL path
# acl aclname port 80 70 21 ...
# acl aclname port 0-1024 ... # ranges allowed
# acl aclname myport 3128 ... # (local socket TCP port)
# acl aclname proto HTTP FTP ...
# acl aclname method GET POST ...
# acl aclname browser [-i] regexp
# # pattern match on User-Agent header
# acl aclname ident username ...
# acl aclname ident_regex [-i] pattern ...
# # string match on ident output.
# # use REQUIRED to accept any non-null ident.
# acl aclname src_as number ...
# acl aclname dst_as number ...
# # Except for access control, AS numbers can be used for
# # routing of requests to specific caches. Here's an
# # example for routing all requests for AS#1241 and only
# # those to mycache.mydomain.net:
# # acl asexample dst_as 1241
# # cache_peer_access mycache.mydomain.net allow asexample
# # cache_peer_access mycache.mydomain.net deny all
#
# acl aclname proxy_auth username ...
# acl aclname proxy_auth_regex [-i] pattern ...
# # list of valid usernames
# # use REQUIRED to accept any valid username.
# #
# # NOTE: when a Proxy-Authentication header is sent but it is not
# # needed during ACL checking the username is NOT logged
# # in access.log.
# #
# # NOTE: proxy_auth requires an EXTERNAL authentication program
# # to check username/password combinations (see
# # authenticate_program).
# #
# # WARNING: proxy_auth can't be used in a transparent proxy. It
# # collides with any authentication done by origin servers. It may
# # seem like it works at first, but it doesn't.
#
# acl aclname snmp_community string ...
# # A community string to limit access to your SNMP Agent
# # Example:
# #
# # acl snmppublic snmp_community public
#
# acl aclname maxconn number
# # This will be matched when the client's IP address has
# # more than <number> HTTP connections established.
#
# acl aclname req_mime_type mime-type1 ...
# # regex match against the MIME type of the request generated
# # by the client. Can be used to detect file upload or some
# # types of HTTP tunnelling requests.
# # NOTE: This does NOT match the reply. You cannot use this
# # to match the returned file type.
#
#Examples:
#acl myexample dst_as 1241
#acl password proxy_auth REQUIRED
#acl fileupload req_mime_type -i ^multipart/form-data$
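#
# Additional illustrative examples for the time and maxconn types
# described above (the ACL names here are hypothetical):
#acl workhours time MTWHF 09:00-17:00
#acl heavyusers maxconn 5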
#
#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localnet src 192.168.1.0/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
# TAG: http_access
# Allowing or Denying access based on defined access lists
#
# Access to the HTTP port:
# http_access allow|deny [!]aclname ...
#
# NOTE on default values:
#
# If there are no "access" lines present, the default is to deny
# the request.
#
# If none of the "access" lines cause a match, the default is the
# opposite of the last line in the list. If the last line was
# deny, then the default is allow. Conversely, if the last line
# is allow, the default will be deny. For these reasons, it is a
# good idea to have a "deny all" or "allow all" entry at the end
# of your access lists to avoid potential confusion.
#
#Default:
# http_access deny all
#
#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# And finally deny all other access to this proxy
http_access allow localnet
http_access allow localhost
http_access deny all
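#
# As an illustrative sketch (not part of the default policy), the rules
# above could be combined with a time ACL to restrict the local net to
# office hours; the "workhours" ACL name is hypothetical:
#
# acl workhours time MTWHF 09:00-17:00
# http_access allow localnet workhours
# http_access deny all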
# TAG: icp_access
# Allowing or Denying access to the ICP port based on defined
# access lists
#
# icp_access allow|deny [!]aclname ...
#
# See http_access for details
#
#Default:
# icp_access deny all
#
#Allow ICP queries from everyone
icp_access allow all
# TAG: miss_access
# Use to force your neighbors to use you as a sibling instead of
# a parent. For example:
#
# acl localclients src 172.16.0.0/16
# miss_access allow localclients
# miss_access deny !localclients
#
# This means that only your local clients are allowed to fetch
# MISSES and all other clients can only fetch HITS.
#
# By default, allow all clients who passed the http_access rules
# to fetch MISSES from us.
#
#Default setting:
# miss_access allow all
# TAG: cache_peer_access
# Similar to 'cache_peer_domain' but provides more flexibility by
# using ACL elements.
#
# cache_peer_access cache-host allow|deny [!]aclname ...
#
# The syntax is identical to 'http_access' and the other lists of
# ACL elements. See the comments for 'http_access' above, or
# the Squid FAQ (http://www.squid-cache.org/FAQ/FAQ-10.html).
#
#Default:
# none
# TAG: proxy_auth_realm
# Specifies the realm name which is to be reported to the client for
# proxy authentication (part of the text the user will see when
# prompted for their username and password).
#
#Default:
# proxy_auth_realm Squid proxy-caching web server
# TAG: ident_lookup_access
# A list of ACL elements which, if matched, cause an ident
# (RFC 931) lookup to be performed for this request. For
# example, you might choose to always perform ident lookups
# for your main multi-user Unix boxes, but not for your Macs
# and PCs. By default, ident lookups are not performed for
# any requests.
#
# To enable ident lookups for specific client addresses, you
# can follow this example:
#
# acl ident_aware_hosts src 192.168.1.0/255.255.255.0
# ident_lookup_access allow ident_aware_hosts
# ident_lookup_access deny all
#
# This option may be disabled by using --disable-ident with
# the configure script.
#
#Default:
# ident_lookup_access deny all
# ADMINISTRATIVE PARAMETERS
# -----------------------------------------------------------------------------
# TAG: cache_mgr
# Email-address of local cache manager who will receive
# mail if the cache dies. The default is "webmaster."
#cache_mgr root
#
#Default:
# cache_mgr root
cache_mgr root
# TAG: cache_effective_user
# TAG: cache_effective_group
#
# If the cache is run as root, it will change its effective/real
# UID/GID to the UID/GID specified below. The default is to
# change to UID to squid and GID to squid.
#
# If Squid is not started as root, the default is to keep the
# current UID/GID. Note that if Squid is not started as root then
# you cannot set http_port to a value lower than 1024.
#
#cache_effective_user squid
#cache_effective_group squid
#
#Default:
# cache_effective_user squid
# cache_effective_group squid
cache_effective_user squid
cache_effective_group squid
# TAG: visible_hostname
# If you want to present a special hostname in error messages, etc,
# then define this. Otherwise, the return value of gethostname()
# will be used. If you have multiple caches in a cluster and
# get errors about IP-forwarding you must set them to have individual
# names with this setting.
#
#Default:
# none
# TAG: unique_hostname
# If you want to have multiple machines with the same
# 'visible_hostname' then you must give each machine a different
# 'unique_hostname' so that forwarding loops can be detected.
#
#Default:
# none
# TAG: hostname_aliases
# A list of other DNS names that your cache has.
#
#Default:
# none
# OPTIONS FOR THE CACHE REGISTRATION SERVICE
# -----------------------------------------------------------------------------
#
# This section contains parameters for the (optional) cache
# announcement service. This service is provided to help
# cache administrators locate one another in order to join or
# create cache hierarchies.
#
# An 'announcement' message is sent (via UDP) to the registration
# service by Squid. By default, the announcement message is NOT
# SENT unless you enable it with 'announce_period' below.
#
# The announcement message includes your hostname, plus the
# following information from this configuration file:
#
# http_port
# icp_port
# cache_mgr
#
# All current information is processed regularly and made
# available on the Web at http://www.ircache.net/Cache/Tracker/.
# TAG: announce_period
# This is how frequently to send cache announcements. The
# default is `0' which disables sending the announcement
# messages.
#
# To enable announcing your cache, just uncomment the line
# below.
#
#Default:
# announce_period 0
#
#To enable announcing your cache, just uncomment the line below.
#announce_period 1 day
# TAG: announce_host
# TAG: announce_file
# TAG: announce_port
# announce_host and announce_port set the hostname and port
# number where the registration message will be sent.
#
# Hostname will default to 'tracker.ircache.net' and port will
# default to 3131. If the 'filename' argument is given,
# the contents of that file will be included in the announce
# message.
#
#Default:
# announce_host tracker.ircache.net
# announce_port 3131
# HTTPD-ACCELERATOR OPTIONS
# -----------------------------------------------------------------------------
# TAG: httpd_accel_host
# TAG: httpd_accel_port
# If you want to run Squid as an httpd accelerator, define the
# host name and port number where the real HTTP server is.
#
# If you want virtual host support then specify the hostname
# as "virtual".
#
# If you want virtual port support then specify the port as "0".
#
# NOTE: enabling httpd_accel_host disables proxy-caching and
# ICP. If you want these features enabled also, then set
# the 'httpd_accel_with_proxy' option.
#
#Default:
# httpd_accel_port 80
# TAG: httpd_accel_single_host on|off
# If you are running Squid as an accelerator and have a single backend
# server then set this to on. This causes Squid to forward the request
# to this server regardless of what any redirectors or Host headers
# say.
#
# Leave this at off if you have multiple backend servers, and use a
# redirector (or host table or private DNS) to map the requests to the
# appropriate backend servers. Note that the mapping needs to be a
# 1-1 mapping between requested and backend (from redirector) domain
# names or caching will fail, as caching is performed using the
# URL returned from the redirector.
#
# See also redirect_rewrites_host_header.
#
#Default:
# httpd_accel_single_host off
# TAG: httpd_accel_with_proxy on|off
# If you want to use Squid as both a local httpd accelerator
# and as a proxy, change this to 'on'. Note however that your
# proxy users may have trouble reaching the accelerated domains
# unless their browsers are configured not to use this proxy for
# those domains (for example via the no_proxy browser configuration
# setting).
#
#Default:
# httpd_accel_with_proxy off
# TAG: httpd_accel_uses_host_header on|off
# HTTP/1.1 requests include a Host: header which is basically the
# hostname from the URL. Squid can be an accelerator for
# different HTTP servers by looking at this header. However,
# Squid does NOT check the value of the Host header, so it opens
# a big security hole. We recommend that this option remain
# disabled unless you are sure of what you are doing.
#
# However, you will need to enable this option if you run Squid
# as a transparent proxy. Otherwise, virtual servers which
# require the Host: header will not be properly cached.
#
#Default:
# httpd_accel_uses_host_header off
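#
# As an illustrative sketch of the accelerator options above (the
# backend hostname is hypothetical), a simple single-backend
# accelerator might use:
#
# httpd_accel_host www.example.com
# httpd_accel_port 80
# httpd_accel_single_host on
# httpd_accel_with_proxy off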
# MISCELLANEOUS
# -----------------------------------------------------------------------------
# TAG: dns_testnames
# The DNS tests exit as soon as the first site is successfully looked up
#
# This test can be disabled with the -D command line option.
#
#Default:
# dns_testnames netscape.com internic.net nlanr.net microsoft.com
# TAG: logfile_rotate
# Specifies the number of logfile rotations to make when you
# type 'squid -k rotate'. The default is 10, which will rotate
# with extensions 0 through 9. Setting logfile_rotate to 0 will
# disable the rotation, but the logfiles are still closed and
# re-opened. This will enable you to rename the logfiles
# yourself just before sending the rotate signal.
#
# Note, the 'squid -k rotate' command normally sends a USR1
# signal to the running squid process. In certain situations
# (e.g. on Linux with Async I/O), USR1 is used for other
# purposes, so -k rotate uses another signal. It is best to get
# in the habit of using 'squid -k rotate' instead of 'kill -USR1 <pid>'.
#
#logfile_rotate 0
#
#Default:
# logfile_rotate 0
logfile_rotate 0
# TAG: append_domain
# Appends local domain name to hostnames without any dots in
# them. append_domain must begin with a period.
#
#Example:
# append_domain .yourdomain.com
#
#Default:
# none
# TAG: tcp_recv_bufsize (bytes)
# Size of receive buffer to set for TCP sockets. Probably just
# as easy to change your kernel's default. Set to zero to use
# the default buffer size.
#
#Default:
# tcp_recv_bufsize 0 bytes
# TAG: err_html_text
# HTML text to include in error messages. Make this a "mailto"
# URL to your admin address, or maybe just a link to your
# organizations Web page.
#
# To include this in your error messages, you must rewrite
# the error template files (found in the "errors" directory).
# Wherever you want the 'err_html_text' line to appear,
# insert a %L tag in the error template file.
#
#Default:
# none
# TAG: deny_info
# Usage: deny_info err_page_name acl
# Example: deny_info ERR_CUSTOM_ACCESS_DENIED bad_guys
#
# This can be used to return an ERR_ page for requests which
# do not pass the 'http_access' rules. A single ACL will cause
# the http_access check to fail. If a 'deny_info' line exists
# for that ACL then Squid returns a corresponding error page.
#
# You may use ERR_ pages that come with Squid or create your own pages
# and put them into the configured errors/ directory.
#
#Default:
# none
# TAG: memory_pools on|off
# If set, Squid will keep pools of allocated (but unused) memory
# available for future use. If memory is a premium on your
# system and you believe your malloc library outperforms Squid
# routines, disable this.
#
#Default:
# memory_pools on
# TAG: memory_pools_limit (bytes)
# Used only with memory_pools on:
# memory_pools_limit 50 MB
#
# If set to a non-zero value, Squid will keep at most the specified
# limit of allocated (but unused) memory in memory pools. All free()
# requests that exceed this limit will be handled by your malloc
# library. Squid does not pre-allocate any memory, just safe-keeps
# objects that otherwise would be free()d. Thus, it is safe to set
# memory_pools_limit to a reasonably high value even if your
# configuration will use less memory.
#
# If not set (default) or set to zero, Squid will keep all memory it
# can. That is, there will be no limit on the total amount of memory
# used for safe-keeping.
#
# To disable memory allocation optimization, do not set
# memory_pools_limit to 0. Set memory_pools to "off" instead.
#
# An overhead for maintaining memory pools is not taken into account
# when the limit is checked. This overhead is close to four bytes per
# object kept. However, pools may actually _save_ memory because of
# reduced memory thrashing in your malloc library.
#
#Default:
# none
# TAG: forwarded_for on|off
# If set, Squid will include your system's IP address or name
# in the HTTP requests it forwards. By default it looks like
# this:
#
# X-Forwarded-For: 192.1.2.3
#
# If you disable this, it will appear as
#
# X-Forwarded-For: unknown
#
#Default:
# forwarded_for on
# TAG: log_icp_queries on|off
# If set, ICP queries are logged to access.log. You may wish
# to disable this if your ICP load is VERY high to speed things
# up or to simplify log analysis.
#
#Default:
# log_icp_queries on
log_icp_queries off
# TAG: icp_hit_stale on|off
# If you want to return ICP_HIT for stale cache objects, set this
# option to 'on'. If you have sibling relationships with caches
# in other administrative domains, this should be 'off'. If you only
# have sibling relationships with caches under your control, then
# it is probably okay to set this to 'on'.
#
#Default:
# icp_hit_stale off
# TAG: minimum_direct_hops
# If using the ICMP pinging stuff, do direct fetches for sites
# which are no more than this many hops away.
#
#Default:
# minimum_direct_hops 4
# TAG: minimum_direct_rtt
# If using the ICMP pinging stuff, do direct fetches for sites
# which are no more than this many rtt milliseconds away.
#
#Default:
# minimum_direct_rtt 400
# TAG: cachemgr_passwd
# Specify passwords for cachemgr operations.
#
# Usage: cachemgr_passwd password action action ...
#
# Some valid actions are (see cache manager menu for a full list):
# 5min
# 60min
# asndb
# authenticator
# cbdata
# client_list
# comm_incoming
# config *
# counters
# delay
# digest_stats
# dns
# events
# filedescriptors
# fqdncache
# histograms
# http_headers
# info
# io
# ipcache
# mem
# menu
# netdb
# non_peers
# objects
# pconn
# peer_select
# redirector
# refresh
# server_list
# shutdown *
# store_digest
# storedir
# utilization
# via_headers
# vm_objects
#
# * Indicates actions which will not be performed without a
# valid password, others can be performed if not listed here.
#
# To disable an action, set the password to "disable".
# To allow performing an action without a password, set the
# password to "none".
#
# Use the keyword "all" to set the same password for all actions.
#
#Example:
# cachemgr_passwd secret shutdown
# cachemgr_passwd lesssssssecret info stats/objects
# cachemgr_passwd disable all
#
#Default:
# none
cachemgr_passwd my-secret-pass all
# TAG: store_avg_object_size (kbytes)
# Average object size, used to estimate number of objects your
# cache can hold. See doc/Release-Notes-1.1.txt. The default is
# 13 KB.
#
#Default:
# store_avg_object_size 13 KB
# TAG: store_objects_per_bucket
# Target number of objects per bucket in the store hash table.
# Lowering this value increases the total number of buckets and
# also the storage maintenance rate. The default is 50.
#
#Default:
# store_objects_per_bucket 20
# TAG: client_db on|off
# If you want to disable collecting per-client statistics, then
# turn off client_db here.
#
#Default:
# client_db on
# TAG: netdb_low
# TAG: netdb_high
# The low and high water marks for the ICMP measurement
# database. These are counts, not percents. The defaults are
# 900 and 1000. When the high water mark is reached, database
# entries will be deleted until the low mark is reached.
#
#Default:
# netdb_low 900
# netdb_high 1000
# TAG: netdb_ping_period
# The minimum period for measuring a site. There will be at
# least this much delay between successive pings to the same
# network. The default is five minutes.
#
#Default:
# netdb_ping_period 5 minutes
# TAG: query_icmp on|off
# If you want to ask your peers to include ICMP data in their ICP
# replies, enable this option.
#
# If your peer has configured Squid (during compilation) with
# '--enable-icmp' then that peer will send ICMP pings to origin server
# sites of the URLs it receives. If you enable this option then the
# ICP replies from that peer will include the ICMP data (if available).
# Then, when choosing a parent cache, Squid will choose the parent with
# the minimal RTT to the origin server. When this happens, the
# hierarchy field of the access.log will be
# "CLOSEST_PARENT_MISS". This option is off by default.
#
#Default:
# query_icmp off
# TAG: test_reachability on|off
# When this is 'on', ICP MISS replies will be ICP_MISS_NOFETCH
# instead of ICP_MISS if the target host is NOT in the ICMP
# database, or has a zero RTT.
#
#Default:
# test_reachability off
# TAG: buffered_logs on|off
# Some log files (cache.log, useragent.log) are written with
# stdio functions, and as such they can be buffered or
# unbuffered. By default they will be unbuffered. Buffering them
# can speed up the writing slightly (though you are unlikely to
# need to worry).
#
#Default:
# buffered_logs off
buffered_logs on
# TAG: reload_into_ims on|off
# When you enable this option, client no-cache or ``reload''
# requests will be changed to If-Modified-Since requests.
# Doing this VIOLATES the HTTP standard. Enabling this
# feature could make you liable for problems which it
# causes.
#
# see also refresh_pattern for a more selective approach.
#
# This option may be disabled by using --disable-http-violations
# with the configure script.
#
#Default:
# reload_into_ims off
# TAG: always_direct
# Usage: always_direct allow|deny [!]aclname ...
#
# Here you can use ACL elements to specify requests which should
# ALWAYS be forwarded directly to origin servers. For example,
# to always directly forward requests for local servers use
# something like:
#
# acl local-servers dstdomain my.domain.net
# always_direct allow local-servers
#
# To always forward FTP requests directly, use
#
# acl FTP proto FTP
# always_direct allow FTP
#
# NOTE: There is a similar, but opposite option named
# 'never_direct'. You need to be aware that "always_direct deny
# foo" is NOT the same thing as "never_direct allow foo". You
# may need to use a deny rule to exclude a more-specific case of
# some other rule. Example:
#
# acl local-external dstdomain external.foo.net
# acl local-servers dstdomain foo.net
# always_direct deny local-external
# always_direct allow local-servers
#
# This option replaces some v1.1 options such as local_domain
# and local_ip.
#
#Default:
# none
# TAG: never_direct
# Usage: never_direct allow|deny [!]aclname ...
#
# never_direct is the opposite of always_direct. Please read
# the description for always_direct if you have not already.
#
# With 'never_direct' you can use ACL elements to specify
# requests which should NEVER be forwarded directly to origin
# servers. For example, to force the use of a proxy for all
# requests, except those in your local domain use something like:
#
# acl local-servers dstdomain foo.net
# acl all src 0.0.0.0/0.0.0.0
# never_direct deny local-servers
# never_direct allow all
#
# or if Squid is inside a firewall and there are local intranet
# servers inside the firewall, use something like:
#
# acl local-intranet dstdomain foo.net
# acl local-external dstdomain external.foo.net
# always_direct deny local-external
# always_direct allow local-intranet
# never_direct allow all
#
# This option replaces some v1.1 options such as inside_firewall
# and firewall_ip.
#
#Default:
# none
# TAG: anonymize_headers
# Usage: anonymize_headers allow|deny header_name ...
#
# This option replaces the old 'http_anonymizer' option with
# something that is much more configurable. You may now
# specify exactly which headers are to be allowed, or which
# are to be removed from outgoing requests.
#
# There are two methods of using this option. You may either
# allow specific headers (thus denying all others), or you
# may deny specific headers (thus allowing all others).
#
# For example, to achieve the same behavior as the old
# 'http_anonymizer standard' option, you should use:
#
# anonymize_headers deny From Referer Server
# anonymize_headers deny User-Agent WWW-Authenticate Link
#
# Or, to reproduce the old 'http_anonymizer paranoid' feature
# you should use:
#
# anonymize_headers allow Allow Authorization Cache-Control
# anonymize_headers allow Content-Encoding Content-Length
# anonymize_headers allow Content-Type Date Expires Host
# anonymize_headers allow If-Modified-Since Last-Modified
# anonymize_headers allow Location Pragma Accept
# anonymize_headers allow Accept-Encoding Accept-Language
# anonymize_headers allow Content-Language Mime-Version
# anonymize_headers allow Retry-After Title Connection
# anonymize_headers allow Proxy-Connection
#
# NOTE: You can not mix "allow" and "deny". All 'anonymize_headers'
# lines must have the same second argument.
#
# By default, all headers are allowed (no anonymizing is
# performed).
#
#Default:
# none
# TAG: fake_user_agent
# If you filter the User-Agent header with 'anonymize_headers' it
# may cause some Web servers to refuse your request. Use this to
# fake one up. For example:
#
# fake_user_agent Nutscrape/1.0 (CP/M; 8-bit)
# (credit to Paul Southworth pauls@etext.org for this one!)
#
#Default:
# none
# TAG: icon_directory
# Where the icons are stored. These are normally kept in
# /usr/lib/squid/icons
#
#Default:
# icon_directory /usr/lib/squid/icons
# TAG: error_directory
# Directory where the error files are read from.
# /usr/lib/squid/errors contains sets of error files
# in different languages. The default error directory
# is /etc/squid/errors, which is a link to one of these
# error sets.
#
# If you wish to create your own versions of the error files,
# either to customize them to suit your language or company,
# copy the template English files to another
# directory and point this tag at them.
#
#error_directory /etc/squid/errors
#
#Default:
# error_directory /etc/squid/errors
# TAG: minimum_retry_timeout (seconds)
# This specifies the minimum connect timeout, for when the
# connect timeout is reduced to compensate for the availability
# of multiple IP addresses.
#
# When a connection to a host is initiated, and that host has
# several IP addresses, the default connection timeout is reduced
# by dividing it by the number of addresses. So, a site with 15
# addresses would then have a timeout of 8 seconds for each
# address attempted. To avoid having the timeout reduced to the
# point where even a working host would not have a chance to
# respond, this setting is provided. The default, and the
# minimum value, is five seconds, and the maximum value is sixty
# seconds, or half of connect_timeout, whichever is greater and
# less than connect_timeout.
#
#Default:
# minimum_retry_timeout 5 seconds
# TAG: maximum_single_addr_tries
# This sets the maximum number of connection attempts for a
# host that only has one address (for multiple-address hosts,
# each address is tried once).
#
# The default value is three tries, the (not recommended)
# maximum is 255 tries. A warning message will be generated
# if it is set to a value greater than ten.
#
#Default:
# maximum_single_addr_tries 3
# TAG: snmp_port
# Squid can now serve statistics and status information via SNMP.
# A value of "0" disables SNMP support. If you wish to use SNMP,
# set this to "3401" to use the normal SNMP port.
#
# NOTE: SNMP support requires use of the --enable-snmp configure
# command line option.
#
#Default:
# snmp_port 0
# TAG: snmp_access
# Allowing or denying access to the SNMP port.
#
# All access to the agent is denied by default.
# usage:
#
# snmp_access allow|deny [!]aclname ...
#
#Example:
# snmp_access allow snmppublic localhost
# snmp_access deny all
#
#Default:
# snmp_access deny all
# TAG: snmp_incoming_address
# TAG: snmp_outgoing_address
# Just like 'udp_incoming_address' above, but for the SNMP port.
#
# snmp_incoming_address is used for the SNMP socket receiving
# messages from SNMP agents.
# snmp_outgoing_address is used for SNMP packets returned to SNMP
# agents.
#
# The default snmp_incoming_address (0.0.0.0) is to listen on all
# available network interfaces.
#
# If snmp_outgoing_address is set to 255.255.255.255 (the default)
# then it will use the same socket as snmp_incoming_address. Only
# change this if you want to have SNMP replies sent using another
# address than where this Squid listens for SNMP queries.
#
# NOTE, snmp_incoming_address and snmp_outgoing_address can not have
# the same value since they both use port 3401.
#
#Default:
# snmp_incoming_address 0.0.0.0
# snmp_outgoing_address 255.255.255.255
# TAG: as_whois_server
# WHOIS server to query for AS numbers. NOTE: AS numbers are
# queried only when Squid starts up, not for every request.
#
#Default:
# as_whois_server whois.ra.net
# TAG: wccp_router
# Use this option to define your WCCP ``home'' router for
# Squid. Setting the 'wccp_router' to 0.0.0.0 (the default)
# disables WCCP.
#
#Default:
# wccp_router 0.0.0.0
# TAG: wccp_version
# According to some users, Cisco IOS 11.2 only supports WCCP
# version 3. If you're using that version of IOS, change
# this value to 3.
#
#Default:
# wccp_version 4
# TAG: wccp_incoming_address
# TAG: wccp_outgoing_address
# wccp_incoming_address Use this option if you require WCCP
# messages to be received on only one
# interface. Do NOT use this option if
# you're unsure how many interfaces you
# have, or if you know you have only one
# interface.
#
# wccp_outgoing_address Use this option if you require WCCP
# messages to be sent out on only one
# interface. Do NOT use this option if
# you're unsure how many interfaces you
# have, or if you know you have only one
# interface.
#
# The default behavior is to not bind to any specific address.
#
# NOTE, wccp_incoming_address and wccp_outgoing_address can not have
# the same value since they both use port 2048.
#
#Default:
# wccp_incoming_address 0.0.0.0
# wccp_outgoing_address 255.255.255.255
# DELAY POOL PARAMETERS (all require DELAY_POOLS compilation option)
# -----------------------------------------------------------------------------
# TAG: delay_pools
# This represents the number of delay pools to be used. For example,
# if you have one class 2 delay pool and one class 3 delay pool, you
# have a total of 2 delay pools.
#
# To enable this option, you must use --enable-delay-pools with the
# configure script.
#
#Default:
# delay_pools 0
# TAG: delay_class
# This defines the class of each delay pool. There must be exactly one
# delay_class line for each delay pool. For example, to define two
# delay pools, one of class 2 and one of class 3, the settings above
# and here would be:
#
#Example:
# delay_pools 2 # 2 delay pools
# delay_class 1 2 # pool 1 is a class 2 pool
# delay_class 2 3 # pool 2 is a class 3 pool
#
# The delay pool classes are:
#
# class 1 Everything is limited by a single aggregate
# bucket.
#
# class 2 Everything is limited by a single aggregate
# bucket as well as an "individual" bucket chosen
# from bits 25 through 32 of the IP address.
#
# class 3 Everything is limited by a single aggregate
# bucket as well as a "network" bucket chosen
# from bits 17 through 24 of the IP address and an
# "individual" bucket chosen from bits 17 through
# 32 of the IP address.
#
# NOTE: If an IP address is a.b.c.d
# -> bits 25 through 32 are "d"
# -> bits 17 through 24 are "c"
# -> bits 17 through 32 are "c * 256 + d"
#
#Default:
# none
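#
# Worked example (hypothetical address): for a client at 10.1.12.34,
# the class 2 "individual" bucket key is d = 34, the class 3 "network"
# bucket key is c = 12, and the class 3 "individual" key is
# c * 256 + d = 12 * 256 + 34 = 3106.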
# TAG: delay_access
# This is used to determine which delay pool a request falls into.
# The first matched delay pool is always used, i.e., if a request falls
# into delay pool number one, no more delay pools are checked; otherwise the
# rest are checked in order of their delay pool number until they have
# all been checked. For example, if you want some_big_clients in delay
# pool 1 and lotsa_little_clients in delay pool 2:
#
#Example:
# delay_access 1 allow some_big_clients
# delay_access 1 deny all
# delay_access 2 allow lotsa_little_clients
# delay_access 2 deny all
#
#Default:
# none
# TAG: delay_parameters
# This defines the parameters for a delay pool. Each delay pool has
# a number of "buckets" associated with it, as explained in the
# description of delay_class. For a class 1 delay pool, the syntax is:
#
#delay_parameters pool aggregate
#
# For a class 2 delay pool:
#
#delay_parameters pool aggregate individual
#
# For a class 3 delay pool:
#
#delay_parameters pool aggregate network individual
#
# The variables here are:
#
# pool a pool number - ie, a number between 1 and the
# number specified in delay_pools as used in
# delay_class lines.
#
# aggregate the "delay parameters" for the aggregate bucket
# (class 1, 2, 3).
#
# individual the "delay parameters" for the individual
# buckets (class 2, 3).
#
# network the "delay parameters" for the network buckets
# (class 3).
#
# A pair of delay parameters is written restore/maximum, where restore is
# the number of bytes (not bits - modem and network speeds are usually
# quoted in bits) per second placed into the bucket, and maximum is the
# maximum number of bytes which can be in the bucket at any time.
#
# For example, if delay pool number 1 is a class 2 delay pool as in the
# above example, and is being used to strictly limit each host to 64kbps
# (plus overheads), with no overall limit, the line is:
#
#delay_parameters 1 -1/-1 8000/8000
#
# Note that the figure -1 is used to represent "unlimited".
#
# And, if delay pool number 2 is a class 3 delay pool as in the above
# example, and you want to limit it to a total of 256kbps (strict limit)
# with each 8-bit network permitted 64kbps (strict limit) and each
# individual host permitted 4800bps with a bucket maximum size of 64kb
# to permit a decent web page to be downloaded at a decent speed
# (if the network is not being limited due to overuse) but slow down
# large downloads more significantly:
#
#delay_parameters 2 32000/32000 8000/8000 600/64000
#
# There must be one delay_parameters line for each delay pool.
#
#Default:
# none
# TAG: delay_initial_bucket_level (percent, 0-100)
# The initial bucket percentage is used to determine how much is put
# in each bucket when squid starts, is reconfigured, or first notices
# a host accessing it (in class 2 and class 3, individual hosts and
# networks only have buckets associated with them once they have been
# "seen" by squid).
#
#Default:
# delay_initial_bucket_level 50
# TAG: incoming_icp_average
# TAG: incoming_http_average
# TAG: incoming_dns_average
# TAG: min_icp_poll_cnt
# TAG: min_dns_poll_cnt
# TAG: min_http_poll_cnt
# Heavy voodoo here. I can't even believe you are reading this.
# Are you crazy? Don't even think about adjusting these unless
# you understand the algorithms in comm_select.c first!
#
#Default:
# incoming_icp_average 6
# incoming_http_average 4
# incoming_dns_average 4
# min_icp_poll_cnt 8
# min_dns_poll_cnt 8
# min_http_poll_cnt 8
# TAG: max_open_disk_fds
# To avoid having disk as the I/O bottleneck Squid can optionally
# bypass the on-disk cache if more than this amount of disk file
# descriptors are open.
#
# A value of 0 indicates no limit.
#
#Default:
# max_open_disk_fds 0
# TAG: offline_mode
# Enable this option and Squid will never try to validate cached
# objects.
#
#Default:
# offline_mode off
# TAG: uri_whitespace
# What to do with requests that have whitespace characters in the
# URI. Options:
#
# strip: The whitespace characters are stripped out of the URL.
# This is the behavior recommended by RFC2616.
# deny: The request is denied. The user receives an "Invalid
# Request" message.
# allow: The request is allowed and the URI is not changed. The
# whitespace characters remain in the URI. Note the
# whitespace is passed to redirector processes if they
# are in use.
# encode: The request is allowed and the whitespace characters are
# encoded according to RFC1738. This could be considered
# a violation of the HTTP/1.1
# RFC because proxies are not allowed to rewrite URI's.
# chop: The request is allowed and the URI is chopped at the
# first whitespace. This might also be considered a
# violation.
#
#Default:
# uri_whitespace strip
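#
# Example (a possible stricter policy than the default 'strip'; users
# requesting URIs with embedded whitespace then receive an error):
#
# uri_whitespace deny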
# TAG: broken_posts
# A list of ACL elements which, if matched, causes Squid to send
# an extra CRLF pair after the body of a PUT/POST request.
#
# Some HTTP servers have broken implementations of PUT/POST,
# and rely on an extra CRLF pair sent by some WWW clients.
#
# Quote from RFC 2068 section 4.1 on this matter:
#
# Note: certain buggy HTTP/1.0 client implementations generate an
# extra CRLF's after a POST request. To restate what is explicitly
# forbidden by the BNF, an HTTP/1.1 client must not preface or follow
# a request with an extra CRLF.
#
#Example:
# acl buggy_server url_regex ^http://....
# broken_posts allow buggy_server
#
#Default:
# none
# TAG: mcast_miss_addr
# Note: This option is only available if Squid is rebuilt with the
# -DMULTICAST_MISS_STREAM option
#
# If you enable this option, every "cache miss" URL will
# be sent out on the specified multicast address.
#
# Do not enable this option unless you are absolutely
# certain you understand what you are doing.
#
#Default:
# mcast_miss_addr 255.255.255.255
# TAG: mcast_miss_ttl
# Note: This option is only available if Squid is rebuilt with the
# -DMULTICAST_MISS_TTL option
#
# This is the time-to-live value for packets multicasted
# when multicasting off cache miss URLs is enabled. By
# default this is set to 'site scope', i.e. 16.
#
#Default:
# mcast_miss_ttl 16
# TAG: mcast_miss_port
# Note: This option is only available if Squid is rebuilt with the
# -DMULTICAST_MISS_STREAM option
#
# This is the port number to be used in conjunction with
# 'mcast_miss_addr'.
#
#Default:
# mcast_miss_port 3135
# TAG: mcast_miss_encode_key
# Note: This option is only available if Squid is rebuilt with the
# -DMULTICAST_MISS_STREAM option
#
# The URLs that are sent in the multicast miss stream are
# encrypted. This is the encryption key.
#
#Default:
# mcast_miss_encode_key XXXXXXXXXXXXXXXX
# TAG: nonhierarchical_direct
# By default, Squid will send any non-hierarchical requests
# (matching hierarchy_stoplist or not cachable request type) direct
# to origin servers.
#
# If you set this to off, then Squid will prefer to send these
# requests to parents.
#
# Note that in most configurations, by turning this off you will only
# add latency to these requests without any improvement in global hit
# ratio.
#
# If you are inside a firewall, see never_direct instead of
# this directive.
#
#Default:
# nonhierarchical_direct on
# TAG: prefer_direct
# Normally Squid tries to use parents for most requests. If for some
# reason you would like Squid to first try going direct and only use
# a parent if going direct fails, set this to off.
#
# By combining nonhierarchical_direct off and prefer_direct on you
# can set up Squid to use a parent as a backup path if going direct
# fails.
#
#Default:
# prefer_direct off
# TAG: strip_query_terms
# By default, Squid strips query terms from requested URLs before
# logging. This protects your user's privacy.
#
#Default:
# strip_query_terms on
# TAG: coredump_dir
# By default Squid leaves core files in the first cache_dir
# directory. If you set 'coredump_dir' to a directory
# that exists, Squid will chdir() to that directory at startup
# and coredump files will be left there.
#
#Default:
# none
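#
# Example (hypothetical path; the directory must already exist and be
# writable by the Squid effective user):
#
# coredump_dir /var/spool/squid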
# TAG: redirector_bypass
# When this is 'on', a request will not go through the
# redirector if all redirectors are busy. If this is 'off'
# and the redirector queue grows too large, Squid will exit
# with a FATAL error and ask you to increase the number of
# redirectors. You should only enable this if the redirectors
# are not critical to your caching system. If you use
# redirectors for access control, and you enable this option,
# then users may have access to pages that they should not
# be allowed to request.
#
#Default:
# redirector_bypass off
# TAG: ignore_unknown_nameservers
# By default Squid checks that DNS responses are received
# from the same IP addresses that they are sent to. If they
# don't match, Squid ignores the response and writes a warning
# message to cache.log. You can allow responses from unknown
# nameservers by setting this option to 'off'.
#
#Default:
# ignore_unknown_nameservers on
# TAG: digest_generation
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This controls whether the server will generate a Cache Digest
# of its contents. By default, Cache Digest generation is
# enabled if Squid is compiled with USE_CACHE_DIGESTS defined.
#
#Default:
# digest_generation on
# TAG: digest_bits_per_entry
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the number of bits of the server's Cache Digest which
# will be associated with the Digest entry for a given HTTP
# Method and URL (public key) combination. The default is 5.
#
#Default:
# digest_bits_per_entry 5
# TAG: digest_rebuild_period (seconds)
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the number of seconds between Cache Digest rebuilds.
#
#Default:
# digest_rebuild_period 1 hour
# TAG: digest_rewrite_period (seconds)
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the number of seconds between Cache Digest writes to
# disk.
#
#Default:
# digest_rewrite_period 1 hour
# TAG: digest_swapout_chunk_size (bytes)
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the number of bytes of the Cache Digest to write to
# disk at a time. It defaults to 4096 bytes (4KB), the Squid
# default swap page.
#
#Default:
# digest_swapout_chunk_size 4096 bytes
# TAG: digest_rebuild_chunk_percentage (percent, 0-100)
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the percentage of the Cache Digest to be scanned at a
# time. By default it is set to 10% of the Cache Digest.
#
#Default:
# digest_rebuild_chunk_percentage 10
# TAG: chroot
# Use this to have Squid do a chroot() while initializing. This
# also causes Squid to fully drop root privileges after
# initializing. This means, for example, that if you use a HTTP
# port less than 1024 and try to reconfigure, you will get an
# error.
#
#Default:
# none
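#
# Example (hypothetical jail path; note that after the chroot(),
# configured paths such as cache_dir and log files are typically
# resolved inside this directory):
#
# chroot /var/squid/chroot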
# TAG: client_persistent_connections
# TAG: server_persistent_connections
# Persistent connection support for clients and servers. By
# default, Squid uses persistent connections (when allowed)
# with its clients and servers. You can use these options to
# disable persistent connections with clients and/or servers.
#
#Default:
# client_persistent_connections on
# server_persistent_connections on
# TAG: pipeline_prefetch
# To boost the performance of pipelined requests to closer
# match that of a non-proxied environment Squid tries to fetch
# up to two requests in parallel from a pipeline.
#
#Default:
# pipeline_prefetch on
# TAG: extension_methods
# Squid only knows about standardized HTTP request methods.
# You can add up to 20 additional "extension" methods here.
#
#Default:
# none
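#
# Example (hypothetical: allowing some WebDAV-style extension methods
# that Squid does not know by default):
#
# extension_methods REPORT MERGE MKACTIVITY CHECKOUT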
# TAG: high_response_time_warning (msec)
# If the one-minute median response time exceeds this value,
# Squid prints a WARNING with debug level 0 to get the
# administrator's attention. The value is in milliseconds.
#
#Default:
# high_response_time_warning 0
# TAG: high_page_fault_warning
# If the one-minute average page fault rate exceeds this
# value, Squid prints a WARNING with debug level 0 to get
# the administrator's attention. The value is in page faults
# per second.
#
#Default:
# high_page_fault_warning 0
# TAG: high_memory_warning
# If the memory usage (as determined by mallinfo) exceeds this
# value, Squid prints a WARNING with debug level 0 to get
# the administrator's attention.
#
#Default:
# high_memory_warning 0
# TAG: store_dir_select_algorithm
# Set this to 'round-robin' as an alternative.
#
#Default:
# store_dir_select_algorithm least-load
# TAG: forward_log
# Note: This option is only available if Squid is rebuilt with the
# -DWIP_FWD_LOG option
#
# Logs the server-side requests.
#
# This is currently work in progress.
#
#Default:
# none
# TAG: ie_refresh on|off
# Microsoft Internet Explorer up until version 5.5 Service
# Pack 1 has an issue with transparent proxies, wherein it
# is impossible to force a refresh. Turning this on provides
# a partial fix to the problem, by causing all IMS-REFRESH
# requests from older IE versions to check the origin server
# for fresh content. This reduces hit ratio by some amount
# (~10% in my experience), but allows users to actually get
# fresh content when they want it. Note that because Squid
# cannot tell if the user is using 5.5 or 5.5SP1, the behavior
# of 5.5 is unchanged from old versions of Squid (i.e. a
# forced refresh is impossible). Newer versions of IE will,
# hopefully, continue to have the new behavior and will be
# handled based on that assumption. This option defaults to
# the old Squid behavior, which is better for hit ratios but
# worse for clients using IE, if they need to be able to
# force fresh content.
#
#Default:
# ie_refresh off
# ------------------
#
# This is the default Squid configuration file. You may wish
# to look at the Squid home page (http://www.squid-cache.org/)
# for the FAQ and other documentation.
#
# The default Squid config file shows what the defaults for
# various options happen to be. If you don't need to change the
# default, you shouldn't uncomment the line. Doing so may cause
# run-time problems. In some cases "none" refers to no default
# setting at all, while in other cases it refers to a valid
# option - the comments for that keyword indicate if this is the
# case.
#
# NETWORK OPTIONS
# -----------------------------------------------------------------------------
# TAG: http_port
# Usage: port
# hostname:port
# 1.2.3.4:port
#
# The socket addresses where Squid will listen for HTTP client
# requests. You may specify multiple socket addresses.
# There are three forms: port alone, hostname with port, and
# IP address with port. If you specify a hostname or IP
# address, then Squid binds the socket to that specific
# address. This replaces the old 'tcp_incoming_address'
# option. Most likely, you do not need to bind to a specific
# address, so you can use the port number alone.
#
# The default port number is 3128.
#
# If you are running Squid in accelerator mode, then you
# probably want to listen on port 80 also, or instead.
#
# The -a command line option will override the *first* port
# number listed here. That option will NOT override an IP
# address, however.
#
# You may specify multiple socket addresses on multiple lines.
#
#Default:
# http_port 3128
http_port 8080
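# Example (hypothetical additional listeners, illustrating the three
# forms described above; addresses are placeholders):
#
# http_port 3128
# http_port 192.0.2.1:8080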
# TAG: icp_port
# The port number where Squid sends and receives ICP queries to
# and from neighbor caches. Default is 3130. To disable use
# "0". May be overridden with -u on the command line.
#
#Default:
# icp_port 3130
icp_port 3130
# TAG: htcp_port
# Note: This option is only available if Squid is rebuilt with the
# --enable-htcp option
#
# The port number where Squid sends and receives HTCP queries to
# and from neighbor caches. Default is 4827. To disable use
# "0".
#
# To enable this option, you must use --enable-htcp with the
# configure script.
#
#Default:
# htcp_port 4827
# TAG: mcast_groups
# This tag specifies a list of multicast groups which your server
# should join to receive multicasted ICP queries.
#
# NOTE! Be very careful what you put here! Be sure you
# understand the difference between an ICP _query_ and an ICP
# _reply_. This option is to be set only if you want to RECEIVE
# multicast queries. Do NOT set this option to SEND multicast
# ICP (use cache_peer for that). ICP replies are always sent via
# unicast, so this option does not affect whether or not you will
# receive replies from multicast group members.
#
# You must be very careful to NOT use a multicast address which
# is already in use by another group of caches.
#
# If you are unsure about multicast, please read the Multicast
# chapter in the Squid FAQ (http://www.squid-cache.org/FAQ/).
#
# Usage: mcast_groups 239.128.16.128 224.0.1.20
#
# By default, Squid doesn't listen on any multicast groups.
#
#Default:
# none
# TAG: tcp_outgoing_address
# TAG: udp_incoming_address
# TAG: udp_outgoing_address
# Usage: tcp_outgoing_address 10.20.30.40
# udp_outgoing_address fully.qualified.domain.name
#
# tcp_outgoing_address is used for connections made to remote
# servers and other caches.
# udp_incoming_address is used for the ICP socket receiving packets
# from other caches.
# udp_outgoing_address is used for ICP packets sent out to other
# caches.
#
# The default behavior is to not bind to any specific address.
#
# A *_incoming_address value of 0.0.0.0 indicates that Squid should
# listen on all available interfaces.
#
# If udp_outgoing_address is set to 255.255.255.255 (the default)
# then it will use the same socket as udp_incoming_address. Only
# change this if you want to have ICP queries sent using another
# address than where this Squid listens for ICP queries from other
# caches.
#
# NOTE, udp_incoming_address and udp_outgoing_address can not
# have the same value since they both use port 3130.
#
# NOTE, tcp_incoming_address has been removed. You can now
# specify IP addresses on the 'http_port' line.
#
#Default:
# tcp_outgoing_address 255.255.255.255
# udp_incoming_address 0.0.0.0
# udp_outgoing_address 255.255.255.255
# OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
# -----------------------------------------------------------------------------
# TAG: cache_peer
# To specify other caches in a hierarchy, use the format:
#
# cache_peer hostname type http_port icp_port
#
# For example,
#
# # proxy icp
# # hostname type port port options
# # -------------------- -------- ----- ----- -----------
# cache_peer parent.foo.net parent 3128 3130 [proxy-only]
# cache_peer sib1.foo.net sibling 3128 3130 [proxy-only]
# cache_peer sib2.foo.net sibling 3128 3130 [proxy-only]
#
# type: either 'parent', 'sibling', or 'multicast'.
#
# proxy_port: The port number where the cache listens for proxy
# requests.
#
# icp_port: Used for querying neighbor caches about
# objects. To have a non-ICP neighbor
# specify '7' for the ICP port and make sure the
# neighbor machine has the UDP echo port
# enabled in its /etc/inetd.conf file.
#
# options: proxy-only
# weight=n
# ttl=n
# no-query
# default
# round-robin
# multicast-responder
# closest-only
# no-digest
# no-netdb-exchange
# no-delay
# login=user:password
# connect-timeout=nn
# digest-url=url
# allow-miss
#
# use 'proxy-only' to specify that objects fetched
# from this cache should not be saved locally.
#
# use 'weight=n' to specify a weighted parent.
# The weight must be an integer. The default weight
# is 1, larger weights are favored more.
#
# use 'ttl=n' to specify an IP multicast TTL to use
# when sending ICP queries to this address.
# Only useful when sending to a multicast group.
# Because we don't accept ICP replies from random
# hosts, you must configure other group members as
# peers with the 'multicast-responder' option below.
#
# use 'no-query' to NOT send ICP queries to this
# neighbor.
#
# use 'default' if this is a parent cache which can
# be used as a "last-resort." You should probably
# only use 'default' in situations where you cannot
# use ICP with your parent cache(s).
#
# use 'round-robin' to define a set of parents which
# should be used in a round-robin fashion in the
# absence of any ICP queries.
#
# 'multicast-responder' indicates that the named peer
# is a member of a multicast group. ICP queries will
# not be sent directly to the peer, but ICP replies
# will be accepted from it.
#
# 'closest-only' indicates that, for ICP_OP_MISS
# replies, we'll only forward CLOSEST_PARENT_MISSes
# and never FIRST_PARENT_MISSes.
#
# use 'no-digest' to NOT request cache digests from
# this neighbor.
#
# 'no-netdb-exchange' disables requesting ICMP
# RTT database (NetDB) from the neighbor.
#
# use 'no-delay' to prevent access to this neighbor
# from influencing the delay pools.
#
# use 'login=user:password' if this is a personal/workgroup
# proxy and your parent requires proxy authentication.
#
# use 'connect-timeout=nn' to specify a peer
# specific connect timeout (also see the
# peer_connect_timeout directive)
#
# use 'digest-url=url' to tell Squid to fetch the cache
# digest (if digests are enabled) for this host from
# the specified URL rather than the Squid default
# location.
#
# use 'allow-miss' to disable Squid's use of only-if-cached
# when forwarding requests to siblings. This is primarily
# useful when icp_hit_stale is used by the sibling. Too
# extensive use of this option may result in forwarding
# loops, and you should avoid having two-way peerings
# with this option (for example, deny peer usage on
# requests from a peer by denying cache_peer_access if the
# source is a peer).
#
# NOTE: non-ICP neighbors must be specified as 'parent'.
#
#Default:
# none
# TAG: cache_peer_domain
# Use to limit the domains for which a neighbor cache will be
# queried. Usage:
#
# cache_peer_domain cache-host domain [domain ...]
# cache_peer_domain cache-host !domain
#
# For example, specifying
#
# cache_peer_domain parent.foo.net .edu
#
# has the effect such that UDP query packets are sent to
# 'parent.foo.net' only when the requested object exists on a
# server in the .edu domain. Prefixing the domainname
# with '!' means that the cache will be queried for objects
# NOT in that domain.
#
# NOTE: * Any number of domains may be given for a cache-host,
# either on the same or separate lines.
# * When multiple domains are given for a particular
# cache-host, the first matched domain is applied.
# * Cache hosts with no domain restrictions are queried
# for all requests.
# * There are no defaults.
# * There is also a 'cache_peer_access' tag in the ACL
# section.
#
#Default:
# none
# TAG: neighbor_type_domain
# usage: neighbor_type_domain parent|sibling domain domain ...
#
# Modifying the neighbor type for specific domains is now
# possible. You can treat some domains differently than the
# default neighbor type specified on the 'cache_peer' line.
# Normally it should only be necessary to list domains which
# should be treated differently because the default neighbor type
# applies for hostnames which do not match domains listed here.
#
#EXAMPLE:
# cache_peer cache.foo.org parent 3128 3130
# neighbor_type_domain cache.foo.org sibling .com .net
# neighbor_type_domain cache.foo.org sibling .au .de
#
#Default:
# none
# TAG: icp_query_timeout (msec)
# Normally Squid will automatically determine an optimal ICP
# query timeout value based on the round-trip-time of recent ICP
# queries. If you want to override the value determined by
# Squid, set this 'icp_query_timeout' to a non-zero value. This
# value is specified in MILLISECONDS, so, to use a 2-second
# timeout (the old default), you would write:
#
# icp_query_timeout 2000
#
#Default:
# icp_query_timeout 0
# TAG: maximum_icp_query_timeout (msec)
# Normally the ICP query timeout is determined dynamically. But
# sometimes it can lead to very large values (say 5 seconds).
# Use this option to put an upper limit on the dynamic timeout
# value. Do NOT use this option to always use a fixed (instead
# of a dynamic) timeout value. To set a fixed timeout see the
# 'icp_query_timeout' directive.
#
#Default:
# maximum_icp_query_timeout 2000
# TAG: mcast_icp_query_timeout (msec)
# For Multicast peers, Squid regularly sends out ICP "probes" to
# count how many other peers are listening on the given multicast
# address. This value specifies how long Squid should wait to
# count all the replies. The default is 2000 msec, or 2
# seconds.
#
#Default:
# mcast_icp_query_timeout 2000
# TAG: dead_peer_timeout (seconds)
# This controls how long Squid waits to declare a peer cache
# as "dead." If there are no ICP replies received in this
# amount of time, Squid will declare the peer dead and not
# expect to receive any further ICP replies. However, it
# continues to send ICP queries, and will mark the peer as
# alive upon receipt of the first subsequent ICP reply.
#
# This timeout also affects when Squid expects to receive ICP
# replies from peers. If more than 'dead_peer' seconds have
# passed since the last ICP reply was received, Squid will not
# expect to receive an ICP reply on the next query. Thus, if
# your time between requests is greater than this timeout, you
# will see a lot of requests sent DIRECT to origin servers
# instead of to your parents.
#
#Default:
# dead_peer_timeout 10 seconds
# TAG: hierarchy_stoplist
# A list of words which, if found in a URL, cause the object to
# be handled directly by this cache. In other words, use this
# to not query neighbor caches for certain objects. You may
# list this option multiple times.
#
#We recommend you use at least the following line.
hierarchy_stoplist cgi-bin ?
# TAG: no_cache
# A list of ACL elements which, if matched, cause the reply to
# be immediately removed from the cache. In other words, use this
# to force certain objects to never be cached.
#
# You must use the word 'DENY' to indicate the ACL names which should
# NOT be cached.
#
#We recommend you use the following two lines.
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
# OPTIONS WHICH AFFECT THE CACHE SIZE
# -----------------------------------------------------------------------------
# TAG: cache_mem (bytes)
# NOTE: THIS PARAMETER DOES NOT SPECIFY THE MAXIMUM PROCESS
# SIZE. IT PLACES A LIMIT ON ONE ASPECT OF SQUID'S MEMORY
# USAGE. SQUID USES MEMORY FOR OTHER THINGS AS WELL.
# YOUR PROCESS WILL PROBABLY BECOME TWICE OR THREE TIMES
# BIGGER THAN THE VALUE YOU PUT HERE
#
# 'cache_mem' specifies the ideal amount of memory to be used
# for:
# * In-Transit objects
# * Hot Objects
# * Negative-Cached objects
#
# Data for these objects are stored in 4 KB blocks. This
# parameter specifies the ideal upper limit on the total size of
# 4 KB blocks allocated. In-Transit objects take the highest
# priority.
#
# In-transit objects have priority over the others. When
# additional space is needed for incoming data, negative-cached
# and hot objects will be released. In other words, the
# negative-cached and hot objects will fill up any unused space
# not needed for in-transit objects.
#
# If circumstances require, this limit will be exceeded.
# Specifically, if your incoming request rate requires more than
# 'cache_mem' of memory to hold in-transit objects, Squid will
# exceed this limit to satisfy the new requests. When the load
# decreases, blocks will be freed until the high-water mark is
# reached. Thereafter, blocks will be used to store hot
# objects.
#
#Default:
# cache_mem 8 MB
cache_mem 128 MB
# TAG: cache_swap_low (percent, 0-100)
# TAG: cache_swap_high (percent, 0-100)
#
# The low- and high-water marks for cache object replacement.
# Replacement begins when the swap (disk) usage is above the
# low-water mark and attempts to maintain utilization near the
# low-water mark. As swap utilization gets close to high-water
# mark object eviction becomes more aggressive. If utilization is
# close to the low-water mark less replacement is done each time.
#
# Defaults are 90% and 95%. If you have a large cache, 5% could be
# hundreds of MB. If this is the case you may wish to set these
# numbers closer together.
#
#Default:
# cache_swap_low 90
# cache_swap_high 95
# TAG: maximum_object_size (bytes)
# Objects larger than this size will NOT be saved on disk. The
# value is specified in kilobytes, and the default is 4MB. If
# you wish to get a high BYTES hit ratio, you should probably
# increase this (one 32 MB object hit counts for 3200 10KB
# hits). If you wish to increase speed more than you want to
# save bandwidth you should leave this low.
#
# NOTE: if using the LFUDA replacement policy you should increase
# this value to maximize the byte hit rate improvement of LFUDA!
# See replacement_policy below for a discussion of this policy.
#
#Default:
# maximum_object_size 4096 KB
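#
# Example (hypothetical: raising the limit to 32 MB, for instance when
# using the LFUDA replacement policy as the NOTE above suggests):
#
# maximum_object_size 32768 KB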
# TAG: minimum_object_size (bytes)
# Objects smaller than this size will NOT be saved on disk. The
# value is specified in kilobytes, and the default is 0 KB, which
# means there is no minimum.
#
#Default:
# minimum_object_size 0 KB
# TAG: maximum_object_size_in_memory (bytes)
# Objects greater than this size will not be kept in the
# memory cache. This should be set high enough to keep objects
# accessed frequently in memory to improve performance whilst low
# enough to keep larger objects from hoarding cache_mem.
#
#Default:
# maximum_object_size_in_memory 8 KB
# TAG: ipcache_size (number of entries)
# TAG: ipcache_low (percent)
# TAG: ipcache_high (percent)
# The size, low-, and high-water marks for the IP cache.
#
#Default:
# ipcache_size 1024
# ipcache_low 90
# ipcache_high 95
# TAG: fqdncache_size (number of entries)
# Maximum number of FQDN cache entries.
#
#Default:
# fqdncache_size 1024
# TAG: cache_replacement_policy
# The cache replacement policy parameter determines which
# objects are evicted (replaced) when disk space is needed.
#
# lru : Squid's original list based LRU policy
# heap GDSF : Greedy-Dual Size Frequency
# heap LFUDA: Least Frequently Used with Dynamic Aging
# heap LRU : LRU policy implemented using a heap
#
# Applies to any cache_dir lines listed below this.
#
# The LRU policies keep recently referenced objects.
#
# The heap GDSF policy optimizes object hit rate by keeping smaller
# popular objects in cache so it has a better chance of getting a
# hit. It achieves a lower byte hit rate than LFUDA though since
# it evicts larger (possibly popular) objects.
#
# The heap LFUDA policy keeps popular objects in cache regardless of
# their size and thus optimizes byte hit rate at the expense of
# hit rate since one large, popular object will prevent many
# smaller, slightly less popular objects from being cached.
#
# Both policies utilize a dynamic aging mechanism that prevents
# cache pollution that can otherwise occur with frequency-based
# replacement policies.
#
# NOTE: if using the LFUDA replacement policy you should increase
# the value of maximum_object_size above its default of 4096 KB
# to maximize the potential byte hit rate improvement of LFUDA.
#
# For more information about the GDSF and LFUDA cache replacement
# policies see http://www.hpl.hp.com/techreports/1999/HPL-1999-69.html
# and http://fog.hpl.external.hp.com/techreports/98/HPL-98-173.html.
#
#Default:
# cache_replacement_policy lru
cache_replacement_policy heap GDSF
# TAG: memory_replacement_policy
# The memory replacement policy parameter determines which
# objects are purged from memory when memory space is needed.
#
# See cache_replacement_policy for details.
#
#Default:
# memory_replacement_policy lru
memory_replacement_policy heap GDSF
# LOGFILE PATHNAMES AND CACHE DIRECTORIES
# -----------------------------------------------------------------------------
# TAG: cache_dir
# Usage:
#
# cache_dir Type Directory-Name Fs-specific-data [options]
#
# You can specify multiple cache_dir lines to spread the
# cache among different disk partitions.
#
# Type specifies the kind of storage system to use. Most
# everyone will want to use "ufs" as the type. If you are using
# Async I/O (--enable-async-io) on Linux or Solaris, then you may
# want to try "aufs" as the type. Async IO support may be
# buggy, however, so beware.
#
# 'Directory' is a top-level directory where cache swap
# files will be stored. If you want to use an entire disk
# for caching, then this can be the mount-point directory.
# The directory must exist and be writable by the Squid
# process. Squid will NOT create this directory for you.
#
# The ufs store type:
#
# "ufs" is the old well-known Squid storage format that has always
# been there.
#
# cache_dir ufs Directory-Name Mbytes L1 L2 [options]
#
# 'Mbytes' is the amount of disk space (MB) to use under this
# directory. The default is 100 MB. Change this to suit your
# configuration.
#
# 'Level-1' is the number of first-level subdirectories which
# will be created under the 'Directory'. The default is 16.
#
# 'Level-2' is the number of second-level subdirectories which
# will be created under each first-level directory. The default
# is 256.
#
# The aufs store type:
#
# "aufs" uses the same storage format as "ufs", utilizing
# POSIX-threads to avoid blocking the main Squid process on
# disk-I/O. This was formerly known in Squid as async-io.
#
# cache_dir aufs Directory-Name Mbytes L1 L2 [options]
#
# see argument descriptions under ufs above
#
# The diskd store type:
#
# "diskd" uses the same storage format as "ufs", utilizing a
# separate process to avoid blocking the main Squid process on
# disk-I/O.
#
# cache_dir diskd Directory-Name Mbytes L1 L2 [options] [Q1=n] [Q2=n]
#
# see argument descriptions under ufs above
#
# Q1 specifies the number of unacknowledged I/O requests when Squid
# stops opening new files. If this many messages are in the queues,
# Squid won't open new files. Default is 64
#
# Q2 specifies the number of unacknowledged messages when Squid
# starts blocking. If this many messages are in the queues,
# Squid blocks until it receives some replies. Default is 72
#
# Common options:
#
# read-only, this cache_dir is read only.
#
# max-size=n, refers to the max object size this storedir supports.
# It is used to initially choose the storedir to dump the object.
# Note: To make optimal use of the max-size limits you should order
# the cache_dir lines with the smallest max-size value first and the
# ones with no max-size specification last.
#
#Default:
# cache_dir ufs /var/spool/squid 100 16 256
cache_dir ufs /cache 1000 16 256
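# An illustrative (hypothetical) layout using the max-size ordering
# described above: the cache_dir with the smallest max-size comes
# first and the one with no max-size last, so small objects land on
# the first (e.g. faster) disk. Directory names here are examples only:
#
# cache_dir ufs /cache_fast 500 16 256 max-size=65536
# cache_dir ufs /cache_big 4000 16 256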
# TAG: cache_access_log
# Logs the client request activity. Contains an entry for
# every HTTP and ICP query received.
#
#Default:
# cache_access_log /var/log/squid/access.log
# TAG: cache_log
# Cache logging file. This is where general information about
# your cache's behavior goes. You can increase the amount of data
# logged to this file with the "debug_options" tag below.
#
#Default:
# cache_log /var/log/squid/cache.log
# TAG: cache_store_log
# Logs the activities of the storage manager. Shows which
# objects are ejected from the cache, and which objects are
# saved and for how long. To disable, enter "none". There are
# no real utilities to analyze this data, so you can safely
# disable it.
#
#Default:
# cache_store_log /var/log/squid/store.log
# TAG: cache_swap_log
# Location for the cache "swap.log." This log file holds the
# metadata of objects saved on disk. It is used to rebuild the
# cache during startup. Normally this file resides in each
# 'cache_dir' directory, but you may specify an alternate
# pathname here. Note you must give a full filename, not just
# a directory. Since this is the index for the whole object
# list you CANNOT periodically rotate it!
#
# If %s is used in the file name then it will be replaced with a
# representation of the cache_dir name where each / is replaced
# with '.'. This is needed to allow adding/removing cache_dir
# lines when cache_swap_log is being used.
#
# If you have more than one 'cache_dir', and %s is not used in the name
# then these swap logs will have names such as:
#
# cache_swap_log.00
# cache_swap_log.01
# cache_swap_log.02
#
# The numbered extension (which is added automatically)
# corresponds to the order of the 'cache_dir' lines in this
# configuration file. If you change the order of the 'cache_dir'
# lines in this file, then these log files will NOT correspond to
# the correct 'cache_dir' entry (unless you manually rename
# them). We recommend that you do NOT use this option. It is
# better to keep these log files in each 'cache_dir' directory.
#
#Default:
# none
# TAG: emulate_httpd_log on|off
# The Cache can emulate the log file format which many 'httpd'
# programs use. To disable/enable this emulation, set
# emulate_httpd_log to 'off' or 'on'. The default
# is to use the native log format since it includes useful
# information that Squid-specific log analyzers use.
#
#Default:
# emulate_httpd_log off
# TAG: log_ip_on_direct on|off
# Log the destination IP address in the hierarchy log tag when going
# direct. Earlier Squid versions logged the hostname here. If you
# prefer the old way set this to off.
#
#Default:
# log_ip_on_direct on
# TAG: mime_table
# Pathname to Squid's MIME table. You shouldn't need to change
# this, but the default file contains examples and formatting
# information if you do.
#
#Default:
# mime_table /etc/squid/mime.conf
# TAG: log_mime_hdrs on|off
# The Cache can record both the request and the response MIME
# headers for each HTTP transaction. The headers are encoded
# safely and will appear as two bracketed fields at the end of
# the access log (for either the native or httpd-emulated log
# formats). To enable this logging set log_mime_hdrs to 'on'.
#
#Default:
# log_mime_hdrs off
# TAG: useragent_log
# Note: This option is only available if Squid is rebuilt with the
# --enable-useragent-log option
#
# Squid will write the User-Agent field from HTTP requests
# to the filename specified here. By default useragent_log
# is disabled.
#
#Default:
# none
# TAG: referer_log
# Note: This option is only available if Squid is rebuilt with the
# --enable-referer-log option
#
# Squid will write the Referer field from HTTP requests to the
# filename specified here. By default referer_log is disabled.
#
#Default:
# none
# TAG: pid_filename
# A filename to write the process-id to. To disable, enter "none".
#
#Default:
# pid_filename /var/run/squid.pid
# TAG: debug_options
# Logging options are set as section,level where each source file
# is assigned a unique section. Lower levels result in less
# output, Full debugging (level 9) can result in a very large
# log file, so be careful. The magic word "ALL" sets debugging
# levels for all sections. We recommend normally running with
# "ALL,1".
#
#Default:
# debug_options ALL,1
# TAG: log_fqdn on|off
# Turn this on if you wish to log fully qualified domain names
# in the access.log. To do this Squid does a DNS lookup of all
# IP's connecting to it. This can (in some situations) increase
# latency, which makes your cache seem slower for interactive
# browsing.
#
#Default:
# log_fqdn off
# TAG: client_netmask
# A netmask for client addresses in logfiles and cachemgr output.
# Change this to protect the privacy of your cache clients.
# A netmask of 255.255.255.0 will log all IP's in that range with
# the last digit set to '0'.
#
#Default:
# client_netmask 255.255.255.255
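# For example (illustrative), with
#
# client_netmask 255.255.255.0
#
# a client at 192.168.1.57 would appear in the logs as 192.168.1.0.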
# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
# -----------------------------------------------------------------------------
# TAG: ftp_user
# If you want the anonymous login password to be more informative
# (and enable the use of picky ftp servers), set this to something
# reasonable for your domain, like wwwuser@somewhere.net
#
# The reason why this is domainless by default is that the
# request can be made on the behalf of a user in any domain,
# depending on how the cache is used.
# Some ftp servers also validate that the email address is valid
# (for example perl.com).
#
#Default:
# ftp_user Squid@
# TAG: ftp_list_width
# Sets the width of ftp listings. This should be set to fit in
# the width of a standard browser. Setting this too small
# can cut off long filenames when browsing ftp sites.
#
#Default:
# ftp_list_width 32
# TAG: ftp_passive
# If your firewall does not allow Squid to use passive
# connections, then turn off this option.
#
#Default:
# ftp_passive on
# TAG: cache_dns_program
# Note: This option is only available if Squid is rebuilt with the
# --disable-internal-dns option
#
# Specify the location of the executable for dnslookup process.
#
#Default:
# cache_dns_program /usr/lib/squid/
# TAG: dns_children
# Note: This option is only available if Squid is rebuilt with the
# --disable-internal-dns option
#
# The number of processes spawned to service DNS name lookups.
# For heavily loaded caches on large servers, you should
# probably increase this value to at least 10. The maximum
# is 32. The default is 5.
#
# You must have at least one dnsserver process.
#
#Default:
# dns_children 5
# TAG: dns_retransmit_interval
# Initial retransmit interval for DNS queries. The interval is
# doubled each time all configured DNS servers have been tried.
#
#
#Default:
# dns_retransmit_interval 5 seconds
# TAG: dns_timeout
# DNS Query timeout. If no response is received to a DNS query
# within this time then all DNS servers for the queried domain
# are assumed to be unavailable.
#
#Default:
# dns_timeout 5 minutes
# TAG: dns_defnames on|off
# Note: This option is only available if Squid is rebuilt with the
# --disable-internal-dns option
#
# Normally the 'dnsserver' disables the RES_DEFNAMES resolver
# option (see res_init(3)). This prevents caches in a hierarchy
# from interpreting single-component hostnames locally. To allow
# dnsserver to handle single-component names, enable this
# option.
#
#Default:
# dns_defnames off
# TAG: dns_nameservers
# Use this if you want to specify a list of DNS name servers
# (IP addresses) to use instead of those given in your
# /etc/resolv.conf file.
#
# Example: dns_nameservers 10.0.0.1 192.172.0.4
#
#Default:
# none
# TAG: diskd_program
# Specify the location of the diskd executable.
# Note that this is only useful if you have compiled in
# diskd as one of the store io modules.
#
#Default:
# diskd_program /usr/lib/squid/diskd
# TAG: unlinkd_program
# Specify the location of the executable for file deletion process.
#
#Default:
# unlinkd_program /usr/lib/squid/unlinkd
# TAG: pinger_program
# Note: This option is only available if Squid is rebuilt with the
# --enable-icmp option
#
# Specify the location of the executable for the pinger process.
# This is only useful if you configured Squid (during compilation)
# with the '--enable-icmp' option.
#
#Default:
# pinger_program /usr/lib/squid/
# TAG: redirect_program
# Specify the location of the executable for the URL redirector.
# Since they can perform almost any function there isn't one included.
# See the Release-Notes for information on how to write one.
# By default, a redirector is not used.
#
#Default:
# none
# TAG: redirect_children
# The number of redirector processes to spawn. If you start
# too few Squid will have to wait for them to process a backlog of
# URLs, slowing it down. If you start too many they will use RAM
# and other system resources.
#
#Default:
# redirect_children 5
# TAG: redirect_rewrites_host_header
# By default Squid rewrites any Host: header in redirected
# requests. If you are running an accelerator this may
# not be a desired effect of a redirector.
#
#Default:
# redirect_rewrites_host_header on
# TAG: redirector_access
# If defined, this access list specifies which requests are
# sent to the redirector processes. By default all requests
# are sent.
#
#Default:
# none
# TAG: authenticate_program
# Specify the command for the external authenticator. Such a
# program reads a line containing "username password" and replies
# "OK" or "ERR" in an endless loop. If you use an authenticator,
# make sure you have 1 acl of type proxy_auth. By default, the
# authenticator_program is not used.
#
# If you want to use the traditional proxy authentication,
# jump over to the ../auth_modules/NCSA directory and
# type:
# % make
# % make install
#
# Then, set this line to something like
#
# authenticate_program /usr/bin/ncsa_auth /usr/etc/passwd
#
#Default:
# none
# TAG: authenticate_children
# The number of authenticator processes to spawn (default 5). If you
# start too few Squid will have to wait for them to process a backlog
# of usercode/password verifications, slowing it down. When password
# verifications are done via a (slow) network you are likely to need
# lots of authenticator processes.
#
#Default:
# authenticate_children 5
# TAG: authenticate_ttl
# The time a checked username/password combination remains cached.
# If a wrong password is given for a cached user, the user gets
# removed from the username/password cache forcing a revalidation.
#
#Default:
# authenticate_ttl 1 hour
# TAG: authenticate_ip_ttl
# With this option you control how long a proxy authentication
# will be bound to a specific IP address. If a request using
# the same user name is received during this time then access
# will be denied and both users are required to reauthenticate
# themselves. The idea behind this is to make it annoying
# for people to share their password with their friends, but
# yet allow a dialup user to reconnect on a different dialup
# port.
#
# The default is 0 to disable the check. The recommended value
# if you have dialup users is no more than 60 seconds, to allow
# the user to redial without hassle. If all your users are
# stationary then higher values may be used.
#
# See also authenticate_ip_ttl_is_strict
#
#Default:
# authenticate_ip_ttl 0 seconds
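# An illustrative (non-default) setting following the dialup
# recommendation above:
#
# authenticate_ip_ttl 60 seconds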
# TAG: authenticate_ip_ttl_is_strict
# This option makes authenticate_ip_ttl a bit stricter. With this
# enabled authenticate_ip_ttl will deny all access from other IP
# addresses until the TTL has expired, and the IP address "owning"
# the userid will not be forced to reauthenticate.
#
#Default:
# authenticate_ip_ttl_is_strict on
# OPTIONS FOR TUNING THE CACHE
# -----------------------------------------------------------------------------
# TAG: wais_relay_host
# TAG: wais_relay_port
# Relay WAIS requests to host (1st arg) at port (2nd arg).
#
#Default:
# wais_relay_port 0
# TAG: request_header_max_size (KB)
# This specifies the maximum size for HTTP headers in a request.
# Request headers are usually relatively small (about 512 bytes).
# Placing a limit on the request header size will catch certain
# bugs (for example with persistent connections) and possibly
# buffer-overflow or denial-of-service attacks.
#
#Default:
# request_header_max_size 10 KB
# TAG: request_body_max_size (KB)
# This specifies the maximum size for an HTTP request body.
# In other words, the maximum size of a PUT/POST request.
# A user who attempts to send a request with a body larger
# than this limit receives an "Invalid Request" error message.
# If you set this parameter to a zero, there will be no limit
# imposed.
#
#Default:
# request_body_max_size 1 MB
# TAG: reply_body_max_size (KB)
# This option specifies the maximum size of a reply body. It
# can be used to prevent users from downloading very large files,
# such as MP3's and movies. The reply size is checked twice.
# First when we get the reply headers, we check the
# content-length value. If the content length value exists and
# is larger than this parameter, the request is denied and the
# user receives an error message that says "the request or reply
# is too large." If there is no content-length, and the reply
# size exceeds this limit, the client's connection is just closed
# and they will receive a partial reply.
#
# NOTE: downstream caches probably can not detect a partial reply
# if there is no content-length header, so they will cache
# partial responses and give them out as hits. You should NOT
# use this option if you have downstream caches.
#
# If you set this parameter to zero (the default), there will be
# no limit imposed.
#
#Default:
# reply_body_max_size 0
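# As an illustrative sketch (avoid if downstream caches exist, per
# the NOTE above), downloads could be capped at about 10 MB with:
#
# reply_body_max_size 10240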
# TAG: refresh_pattern
# usage: refresh_pattern [-i] regex min percent max [options]
#
# By default, regular expressions are CASE-SENSITIVE. To make
# them case-insensitive, use the -i option.
#
# 'Min' is the time (in minutes) an object without an explicit
# expiry time should be considered fresh. The recommended
# value is 0, any higher values may cause dynamic applications
# to be erroneously cached unless the application designer
# has taken the appropriate actions.
#
# 'Percent' is a percentage of the objects age (time since last
# modification age) an object without explicit expiry time
# will be considered fresh.
#
# 'Max' is an upper limit on how long objects without an explicit
# expiry time will be considered fresh.
#
# options: override-expire
# override-lastmod
# reload-into-ims
# ignore-reload
#
# override-expire enforces min age even if the server
# sent an Expires: header. Doing this VIOLATES the HTTP
# standard. Enabling this feature could make you liable
# for problems which it causes.
#
# override-lastmod enforces min age even on objects
# that were modified recently.
#
# reload-into-ims changes client no-cache or ``reload''
# to If-Modified-Since requests. Doing this VIOLATES the
# HTTP standard. Enabling this feature could make you
# liable for problems which it causes.
#
# ignore-reload ignores a client no-cache or ``reload''
# header. Doing this VIOLATES the HTTP standard. Enabling
# this feature could make you liable for problems which
# it causes.
#
# Please see the file doc/Release-Notes-1.1.txt for a full
# description of Squid's refresh algorithm. Basically a
# cached object is: (the order is changed from 1.1.X)
#
# FRESH if expires > now, else STALE
# STALE if age > max
# FRESH if lm-factor < percent, else STALE
# FRESH if age < min
# else STALE
#
# The refresh_pattern lines are checked in the order listed here.
# The first entry which matches is used. If none of the entries
# match, then the default will be used.
#
# Note, you must uncomment all the default lines if you want
# to change one. The default setting is only active if none is
# used.
#
#Default:
# refresh_pattern ^ftp: 1440 20% 10080
# refresh_pattern ^gopher: 1440 0% 1440
# refresh_pattern . 0 20% 4320
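# An illustrative (non-default) extra line, placed before the
# catch-all '.' pattern, that lets GIF images be considered fresh
# for up to a day using a case-insensitive match:
#
# refresh_pattern -i \.gif$ 0 20% 1440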
# TAG: reference_age
# As a part of normal operation, Squid performs Least Recently
# Used removal of cached objects. The LRU age for removal is
# computed dynamically, based on the amount of disk space in
# use. The dynamic value can be seen in the Cache Manager 'info'
# output.
#
# The 'reference_age' parameter defines the maximum LRU age. For
# example, setting reference_age to '1 week' will cause objects
# to be removed if they have not been accessed for a week or
# more. The default value is one year.
#
# Specify a number here, followed by units of time. For example:
# 1 week
# 3.5 days
# 4 months
# 2.2 hours
#
# NOTE: this parameter is not used when using the enhanced
# replacement policies, GDSF or LFUDA.
#
#Default:
# reference_age 1 year
# TAG: quick_abort_min (KB)
# TAG: quick_abort_max (KB)
# TAG: quick_abort_pct (percent)
# The cache can be configured to continue downloading aborted
# requests. This may be undesirable on slow (e.g. SLIP) links
# and/or very busy caches. Impatient users may tie up file
# descriptors and bandwidth by repeatedly requesting and
# immediately aborting downloads.
#
# When the user aborts a request, Squid will check the
# quick_abort values against the amount of data transferred
# so far.
#
# If the transfer has less than 'quick_abort_min' KB remaining,
# it will finish the retrieval. Setting 'quick_abort_min' to -1
# will disable the quick_abort feature.
#
# If the transfer has more than 'quick_abort_max' KB remaining,
# it will abort the retrieval.
#
# If more than 'quick_abort_pct' of the transfer has completed,
# it will finish the retrieval.
#
#Default:
# quick_abort_min 16 KB
# quick_abort_max 16 KB
# quick_abort_pct 95
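# Worked example with the defaults above: if a user aborts a 100 KB
# download after 90 KB, only 10 KB remains (< quick_abort_min 16 KB),
# so Squid finishes the retrieval; aborting after 20 KB leaves 80 KB
# remaining (> quick_abort_max 16 KB), so Squid aborts it too.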
# TAG: negative_ttl time-units
# Time-to-Live (TTL) for failed requests. Certain types of
# failures (such as "connection refused" and "404 Not Found") are
# negatively-cached for a configurable amount of time. The
# default is 5 minutes. Note that this is different from
# negative caching of DNS lookups.
#
#Default:
# negative_ttl 5 minutes
# TAG: positive_dns_ttl time-units
# Time-to-Live (TTL) for positive caching of successful DNS lookups.
# Default is 6 hours (360 minutes). If you want to minimize the
# use of Squid's ipcache, set this to 1, not 0.
#
#Default:
# positive_dns_ttl 6 hours
# TAG: negative_dns_ttl time-units
# Time-to-Live (TTL) for negative caching of failed DNS lookups.
#
#Default:
# negative_dns_ttl 5 minutes
# TAG: range_offset_limit (bytes)
# Sets an upper limit on how far into the file a Range request
# may be to cause Squid to prefetch the whole file. If beyond this
# limit then Squid forwards the Range request as it is and the result
# is NOT cached.
#
# This is to stop a far-ahead range request (let's say starting at 17MB)
# from making Squid fetch the whole object up to that point before
# sending anything to the client.
#
# A value of -1 causes Squid to always fetch the object from the
# beginning so that it may cache the result. (2.0 style)
#
# A value of 0 causes Squid to never fetch more than the
# client requested. (default)
#
#Default:
# range_offset_limit 0 KB
# TIMEOUTS
# -----------------------------------------------------------------------------
# TAG: connect_timeout time-units
# Some systems (notably Linux) can not be relied upon to properly
# time out connect(2) requests. Therefore the Squid process
# enforces its own timeout on server connections. This parameter
# specifies how long to wait for the connect to complete. The
# default is two minutes (120 seconds).
#
#Default:
# connect_timeout 2 minutes
# TAG: peer_connect_timeout time-units
# This parameter specifies how long to wait for a pending TCP
# connection to a peer cache. The default is 30 seconds. You
# may also set different timeout values for individual neighbors
# with the 'connect-timeout' option on a 'cache_peer' line.
#
#Default:
# peer_connect_timeout 30 seconds
# TAG: siteselect_timeout time-units
# For URN to multiple URL's URL selection
#
#Default:
# siteselect_timeout 4 seconds
# TAG: read_timeout time-units
# The read_timeout is applied on server-side connections. After
# each successful read(), the timeout will be extended by this
# amount. If no data is read again after this amount of time,
# the request is aborted and logged with ERR_READ_TIMEOUT. The
# default is 15 minutes.
#
#Default:
# read_timeout 15 minutes
# TAG: request_timeout
# How long to wait for an HTTP request after connection
# establishment. For persistent connections, wait this long
# after the previous request completes.
#
#Default:
# request_timeout 30 seconds
# TAG: client_lifetime time-units
# The maximum amount of time that a client (browser) is allowed to
# remain connected to the cache process. This protects the Cache
# from having a lot of sockets (and hence file descriptors) tied up
# in a CLOSE_WAIT state from remote clients that go away without
# properly shutting down (either because of a network failure or
# because of a poor client implementation). The default is one
# day, 1440 minutes.
#
# NOTE: The default value is intended to be much larger than any
# client would ever need to be connected to your cache. You
# should probably change client_lifetime only as a last resort.
# If you seem to have many client connections tying up
# filedescriptors, we recommend first tuning the read_timeout,
# request_timeout, pconn_timeout and quick_abort values.
#
#Default:
# client_lifetime 1 day
# TAG: half_closed_clients
# Some clients may shut down the sending side of their TCP
# connections, while leaving their receiving sides open. Sometimes,
# Squid can not tell the difference between a half-closed and a
# fully-closed TCP connection. By default, half-closed client
# connections are kept open until a read(2) or write(2) on the
# socket returns an error. Change this option to 'off' and Squid
# will immediately close client connections when read(2) returns
# "no more data to read."
#
#Default:
# half_closed_clients on
# TAG: pconn_timeout
# Timeout for idle persistent connections to servers and other
# proxies.
#
#Default:
# pconn_timeout 120 seconds
# TAG: ident_timeout
# Maximum time to wait for IDENT requests. If this is too high,
# and you enabled 'ident_lookup', then you might be susceptible
# to denial-of-service by having many ident requests going at
# once.
#
# Only src type ACL checks are fully supported. A src_domain
# ACL might work at times, but it will not always provide
# the correct result.
#
# This option may be disabled by using --disable-ident with
# the configure script.
#
#Default:
# ident_timeout 10 seconds
# TAG: shutdown_lifetime time-units
# When SIGTERM or SIGHUP is received, the cache is put into
# "shutdown pending" mode until all active sockets are closed.
# This value is the lifetime to set for all open descriptors
# during shutdown mode. Any active clients after this many
# seconds will receive a 'timeout' message.
#
#Default:
# shutdown_lifetime 30 seconds
# ACCESS CONTROLS
# -----------------------------------------------------------------------------
# TAG: acl
# Defining an Access List
#
# acl aclname acltype string1 ...
# acl aclname acltype "file" ...
#
# when using "file", the file should contain one item per line
#
# acltype is one of src dst srcdomain dstdomain url_pattern
# urlpath_pattern time port proto method browser user
#
# By default, regular expressions are CASE-SENSITIVE. To make
# them case-insensitive, use the -i option.
#
# acl aclname src ip-address/netmask ... (client's IP address)
# acl aclname src addr1-addr2/netmask ... (range of addresses)
# acl aclname dst ip-address/netmask ... (URL host's IP address)
# acl aclname myip ip-address/netmask ... (local socket IP address)
#
# acl aclname srcdomain .foo.com ... # reverse lookup, client IP
# acl aclname dstdomain .foo.com ... # Destination server from URL
# acl aclname srcdom_regex [-i] xxx ... # regex matching client name
# acl aclname dstdom_regex [-i] xxx ... # regex matching server
# # For dstdomain and dstdom_regex a reverse lookup is tried if an IP
# # based URL is used. The name "none" is used if the reverse lookup
# # fails.
#
# acl aclname time [day-abbrevs] [h1:m1-h2:m2]
# day-abbrevs:
# S - Sunday
# M - Monday
# T - Tuesday
# W - Wednesday
# H - Thursday
# F - Friday
# A - Saturday
# h1:m1 must be less than h2:m2
# acl aclname url_regex [-i] ^http:// ... # regex matching on whole URL
# acl aclname urlpath_regex [-i] \.gif$ ... # regex matching on URL path
# acl aclname port 80 70 21 ...
# acl aclname port 0-1024 ... # ranges allowed
# acl aclname myport 3128 ... # (local socket TCP port)
# acl aclname proto HTTP FTP ...
# acl aclname method GET POST ...
# acl aclname browser [-i] regexp
# # pattern match on User-Agent header
# acl aclname ident username ...
# acl aclname ident_regex [-i] pattern ...
# # string match on ident output.
# # use REQUIRED to accept any non-null ident.
# acl aclname src_as number ...
# acl aclname dst_as number ...
# # Except for access control, AS numbers can be used for
# # routing of requests to specific caches. Here's an
# # example for routing all requests for AS#1241 and only
# # those to mycache.mydomain.net:
# # acl asexample dst_as 1241
# # cache_peer_access mycache.mydomain.net allow asexample
# # cache_peer_access mycache.mydomain.net deny all
#
# acl aclname proxy_auth username ...
# acl aclname proxy_auth_regex [-i] pattern ...
# # list of valid usernames
# # use REQUIRED to accept any valid username.
# #
# # NOTE: when a Proxy-Authentication header is sent but it is not
# # needed during ACL checking the username is NOT logged
# # in access.log.
# #
# # NOTE: proxy_auth requires an EXTERNAL authentication program
# # to check username/password combinations (see
# # authenticate_program).
# #
# # WARNING: proxy_auth can't be used in a transparent proxy. It
# # collides with any authentication done by origin servers. It may
# # seem like it works at first, but it doesn't.
#
# acl aclname snmp_community string ...
# # A community string to limit access to your SNMP Agent
# # Example:
# #
# # acl snmppublic snmp_community public
#
# acl aclname maxconn number
# # This will be matched when the client's IP address has
# # more than <number> HTTP connections established.
#
# acl aclname req_mime_type mime-type1 ...
# # regex match against the mime type of the request generated
# # by the client. Can be used to detect file upload or some
# # types of HTTP tunnelling requests.
# # NOTE: This does NOT match the reply. You cannot use this
# # to match the returned file type.
#
#Examples:
#acl myexample dst_as 1241
#acl password proxy_auth REQUIRED
#acl fileupload req_mime_type -i ^multipart/form-data$
#
#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localnet src 192.168.1.0/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
# TAG: http_access
# Allowing or Denying access based on defined access lists
#
# Access to the HTTP port:
# http_access allow|deny [!]aclname ...
#
# NOTE on default values:
#
# If there are no "access" lines present, the default is to deny
# the request.
#
# If none of the "access" lines cause a match, the default is the
# opposite of the last line in the list. If the last line was
# deny, then the default is allow. Conversely, if the last line
# is allow, the default will be deny. For these reasons, it is a
# good idea to have a "deny all" or "allow all" entry at the end
# of your access lists to avoid potential confusion.
#
#Default:
# http_access deny all
#
#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# And finally deny all other access to this proxy
http_access allow localnet
http_access allow localhost
http_access deny all
# TAG: icp_access
# Allowing or Denying access to the ICP port based on defined
# access lists
#
# icp_access allow|deny [!]aclname ...
#
# See http_access for details
#
#Default:
# icp_access deny all
#
#Allow ICP queries from everyone
icp_access allow all
# TAG: miss_access
# Use to force your neighbors to use you as a sibling instead of
# a parent. For example:
#
# acl localclients src 172.16.0.0/16
# miss_access allow localclients
# miss_access deny !localclients
#
# This means that only your local clients are allowed to fetch
# MISSES and all other clients can only fetch HITS.
#
# By default, allow all clients who passed the http_access rules
# to fetch MISSES from us.
#
#Default setting:
# miss_access allow all
# TAG: cache_peer_access
# Similar to 'cache_peer_domain' but provides more flexibility by
# using ACL elements.
#
# cache_peer_access cache-host allow|deny [!]aclname ...
#
# The syntax is identical to 'http_access' and the other lists of
# ACL elements. See the comments for 'http_access' above, or
# the Squid FAQ (http://www.squid-cache.org/FAQ/FAQ-10.html).
#
#Default:
# none
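#Example (a sketch; the peer hostname and ACL name are hypothetical):
# acl ftp_requests proto FTP
# cache_peer_access parent.foo.net allow ftp_requests
# cache_peer_access parent.foo.net deny all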
# TAG: proxy_auth_realm
# Specifies the realm name which is to be reported to the client for
# proxy authentication (part of the text the user will see when
# prompted for their username and password).
#
#Default:
# proxy_auth_realm Squid proxy-caching web server
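#
#Example (the realm text is arbitrary; choose one your users will
#recognize in the browser's authentication dialog):
# proxy_auth_realm Example Corp proxy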
# TAG: ident_lookup_access
# A list of ACL elements which, if matched, cause an ident
# (RFC 931) lookup to be performed for this request. For
# example, you might choose to always perform ident lookups
# for your main multi-user Unix boxes, but not for your Macs
# and PCs. By default, ident lookups are not performed for
# any requests.
#
# To enable ident lookups for specific client addresses, you
# can follow this example:
#
# acl ident_aware_hosts src 192.168.1.0/255.255.255.0
# ident_lookup_access allow ident_aware_hosts
# ident_lookup_access deny all
#
# This option may be disabled by using --disable-ident with
# the configure script.
#
#Default:
# ident_lookup_access deny all
# ADMINISTRATIVE PARAMETERS
# -----------------------------------------------------------------------------
# TAG: cache_mgr
# Email-address of local cache manager who will receive
# mail if the cache dies. The default is "root".
#cache_mgr root
#
#Default:
# cache_mgr root
cache_mgr root
# TAG: cache_effective_user
# TAG: cache_effective_group
#
# If the cache is run as root, it will change its effective/real
# UID/GID to the UID/GID specified below. The default is to
# change the UID to squid and the GID to squid.
#
# If Squid is not started as root, the default is to keep the
# current UID/GID. Note that if Squid is not started as root then
# you cannot set http_port to a value lower than 1024.
#
#cache_effective_user squid
#cache_effective_group squid
#
#Default:
# cache_effective_user squid
# cache_effective_group squid
cache_effective_user squid
cache_effective_group squid
# TAG: visible_hostname
# If you want to present a special hostname in error messages, etc,
# then define this. Otherwise, the return value of gethostname()
# will be used. If you have multiple caches in a cluster and
# get errors about IP-forwarding you must set them to have individual
# names with this setting.
#
#Default:
# none
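#
#Example (the hostname is hypothetical):
# visible_hostname proxy.example.com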
# TAG: unique_hostname
# If you want to have multiple machines with the same
# 'visible_hostname' then you must give each machine a different
# 'unique_hostname' so that forwarding loops can be detected.
#
#Default:
# none
# TAG: hostname_aliases
# A list of other DNS names that your cache has.
#
#Default:
# none
# OPTIONS FOR THE CACHE REGISTRATION SERVICE
# -----------------------------------------------------------------------------
#
# This section contains parameters for the (optional) cache
# announcement service. This service is provided to help
# cache administrators locate one another in order to join or
# create cache hierarchies.
#
# An 'announcement' message is sent (via UDP) to the registration
# service by Squid. By default, the announcement message is NOT
# SENT unless you enable it with 'announce_period' below.
#
# The announcement message includes your hostname, plus the
# following information from this configuration file:
#
# http_port
# icp_port
# cache_mgr
#
# All current information is processed regularly and made
# available on the Web at http://www.ircache.net/Cache/Tracker/.
# TAG: announce_period
# This is how frequently to send cache announcements. The
# default is `0' which disables sending the announcement
# messages.
#
# To enable announcing your cache, just uncomment the line
# below.
#
#Default:
# announce_period 0
#
#To enable announcing your cache, just uncomment the line below.
#announce_period 1 day
# TAG: announce_host
# TAG: announce_file
# TAG: announce_port
# announce_host and announce_port set the hostname and port
# number where the registration message will be sent.
#
# Hostname will default to 'tracker.ircache.net' and port will
# default to 3131. If the 'filename' argument is given,
# the contents of that file will be included in the announce
# message.
#
#Default:
# announce_host tracker.ircache.net
# announce_port 3131
# HTTPD-ACCELERATOR OPTIONS
# -----------------------------------------------------------------------------
# TAG: httpd_accel_host
# TAG: httpd_accel_port
# If you want to run Squid as an httpd accelerator, define the
# host name and port number where the real HTTP server is.
#
# If you want virtual host support then specify the hostname
# as "virtual".
#
# If you want virtual port support then specify the port as "0".
#
# NOTE: enabling httpd_accel_host disables proxy-caching and
# ICP. If you want these features enabled also, then set
# the 'httpd_accel_with_proxy' option.
#
#Default:
# httpd_accel_port 80
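#
#Example accelerator setup (a sketch; the backend hostname is
#hypothetical):
# httpd_accel_host www.example.com
# httpd_accel_port 80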
# TAG: httpd_accel_single_host on|off
# If you are running Squid as an accelerator and have a single backend
# server then set this to on. This causes Squid to forward the request
# to this server regardless of what any redirectors or Host headers
# say.
#
# Leave this at off if you have multiple backend servers, and use a
# redirector (or host table or private DNS) to map the requests to the
# appropriate backend servers. Note that the mapping needs to be a
# 1-1 mapping between requested and backend (from redirector) domain
# names or caching will fail, as caching is performed using the
# URL returned from the redirector.
#
# See also redirect_rewrites_host_header.
#
#Default:
# httpd_accel_single_host off
# TAG: httpd_accel_with_proxy on|off
# If you want to use Squid as both a local httpd accelerator
# and as a proxy, change this to 'on'. Note however that your
# proxy users may have trouble reaching the accelerated domains
# unless their browsers are configured not to use this proxy for
# those domains (for example via the no_proxy browser configuration
# setting)
#
#Default:
# httpd_accel_with_proxy off
# TAG: httpd_accel_uses_host_header on|off
# HTTP/1.1 requests include a Host: header which is basically the
# hostname from the URL. Squid can be an accelerator for
# different HTTP servers by looking at this header. However,
# Squid does NOT check the value of the Host header, so it opens
# a big security hole. We recommend that this option remain
# disabled unless you are sure of what you are doing.
#
# However, you will need to enable this option if you run Squid
# as a transparent proxy. Otherwise, virtual servers which
# require the Host: header will not be properly cached.
#
#Default:
# httpd_accel_uses_host_header off
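#
# The transparent-proxy case mentioned above typically needs the whole
# group of settings below (a sketch for Squid 2.x; verify against your
# version before enabling):
#
# httpd_accel_host virtual
# httpd_accel_port 0
# httpd_accel_with_proxy on
# httpd_accel_uses_host_header on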
# MISCELLANEOUS
# -----------------------------------------------------------------------------
# TAG: dns_testnames
# The DNS tests exit as soon as the first site is successfully looked up.
#
# This test can be disabled with the -D command line option.
#
#Default:
# dns_testnames netscape.com internic.net nlanr.net microsoft.com
# TAG: logfile_rotate
# Specifies the number of logfile rotations to make when you
# type 'squid -k rotate'. The default is 10, which will rotate
# with extensions 0 through 9. Setting logfile_rotate to 0 will
# disable the rotation, but the logfiles are still closed and
# re-opened. This will enable you to rename the logfiles
# yourself just before sending the rotate signal.
#
# Note, the 'squid -k rotate' command normally sends a USR1
# signal to the running squid process. In certain situations
# (e.g. on Linux with Async I/O), USR1 is used for other
# purposes, so -k rotate uses another signal. It is best to get
# in the habit of using 'squid -k rotate' instead of 'kill -USR1 <pid>'.
#
#
#logfile_rotate 0
#
#Default:
# logfile_rotate 0
logfile_rotate 0
# TAG: append_domain
# Appends local domain name to hostnames without any dots in
# them. append_domain must begin with a period.
#
#Example:
# append_domain .yourdomain.com
#
#Default:
# none
# TAG: tcp_recv_bufsize (bytes)
# Size of receive buffer to set for TCP sockets. Probably just
# as easy to change your kernel's default. Set to zero to use
# the default buffer size.
#
#Default:
# tcp_recv_bufsize 0 bytes
# TAG: err_html_text
# HTML text to include in error messages. Make this a "mailto"
# URL to your admin address, or maybe just a link to your
# organization's Web page.
#
# To include this in your error messages, you must rewrite
# the error template files (found in the "errors" directory).
# Wherever you want the 'err_html_text' line to appear,
# insert a %L tag in the error template file.
#
#Default:
# none
# TAG: deny_info
# Usage: deny_info err_page_name acl
# Example: deny_info ERR_CUSTOM_ACCESS_DENIED bad_guys
#
# This can be used to return an ERR_ page for requests which
# do not pass the 'http_access' rules. A single ACL will cause
# the http_access check to fail. If a 'deny_info' line exists
# for that ACL then Squid returns a corresponding error page.
#
# You may use ERR_ pages that come with Squid or create your own pages
# and put them into the configured errors/ directory.
#
#Default:
# none
# TAG: memory_pools on|off
# If set, Squid will keep pools of allocated (but unused) memory
# available for future use. If memory is at a premium on your
# system and you believe your malloc library outperforms Squid
# routines, disable this.
#
#Default:
# memory_pools on
# TAG: memory_pools_limit (bytes)
# Used only with memory_pools on:
# memory_pools_limit 50 MB
#
# If set to a non-zero value, Squid will keep at most the specified
# limit of allocated (but unused) memory in memory pools. All free()
# requests that exceed this limit will be handled by your malloc
# library. Squid does not pre-allocate any memory, just safe-keeps
# objects that otherwise would be free()d. Thus, it is safe to set
# memory_pools_limit to a reasonably high value even if your
# configuration will use less memory.
#
# If not set (default) or set to zero, Squid will keep all memory it
# can. That is, there will be no limit on the total amount of memory
# used for safe-keeping.
#
# To disable memory allocation optimization, do not set
# memory_pools_limit to 0. Set memory_pools to "off" instead.
#
# An overhead for maintaining memory pools is not taken into account
# when the limit is checked. This overhead is close to four bytes per
# object kept. However, pools may actually _save_ memory because of
# reduced memory thrashing in your malloc library.
#
#Default:
# none
# TAG: forwarded_for on|off
# If set, Squid will include your system's IP address or name
# in the HTTP requests it forwards. By default it looks like
# this:
#
# X-Forwarded-For: 192.1.2.3
#
# If you disable this, it will appear as
#
# X-Forwarded-For: unknown
#
#Default:
# forwarded_for on
# TAG: log_icp_queries on|off
# If set, ICP queries are logged to access.log. You may wish
# to disable this if your ICP load is VERY high to speed things
# up or to simplify log analysis.
#
#Default:
# log_icp_queries on
log_icp_queries off
# TAG: icp_hit_stale on|off
# If you want to return ICP_HIT for stale cache objects, set this
# option to 'on'. If you have sibling relationships with caches
# in other administrative domains, this should be 'off'. If you only
# have sibling relationships with caches under your control, then
# it is probably okay to set this to 'on'.
#
#Default:
# icp_hit_stale off
# TAG: minimum_direct_hops
# If using the ICMP pinging stuff, do direct fetches for sites
# which are no more than this many hops away.
#
#Default:
# minimum_direct_hops 4
# TAG: minimum_direct_rtt
# If using the ICMP pinging stuff, do direct fetches for sites
# which are no more than this many rtt milliseconds away.
#
#Default:
# minimum_direct_rtt 400
# TAG: cachemgr_passwd
# Specify passwords for cachemgr operations.
#
# Usage: cachemgr_passwd password action action ...
#
# Some valid actions are (see cache manager menu for a full list):
# 5min
# 60min
# asndb
# authenticator
# cbdata
# client_list
# comm_incoming
# config *
# counters
# delay
# digest_stats
# dns
# events
# filedescriptors
# fqdncache
# histograms
# http_headers
# info
# io
# ipcache
# mem
# menu
# netdb
# non_peers
# objects
# pconn
# peer_select
# redirector
# refresh
# server_list
# shutdown *
# store_digest
# storedir
# utilization
# via_headers
# vm_objects
#
# * Indicates actions which will not be performed without a
# valid password, others can be performed if not listed here.
#
# To disable an action, set the password to "disable".
# To allow performing an action without a password, set the
# password to "none".
#
# Use the keyword "all" to set the same password for all actions.
#
#Example:
# cachemgr_passwd secret shutdown
# cachemgr_passwd lesssssssecret info stats/objects
# cachemgr_passwd disable all
#
#Default:
# none
cachemgr_passwd my-secret-pass all
# TAG: store_avg_object_size (kbytes)
# Average object size, used to estimate number of objects your
# cache can hold. See doc/Release-Notes-1.1.txt. The default is
# 13 KB.
#
#Default:
# store_avg_object_size 13 KB
# TAG: store_objects_per_bucket
# Target number of objects per bucket in the store hash table.
# Lowering this value increases the total number of buckets and
# also the storage maintenance rate. The default is 50.
#
#Default:
# store_objects_per_bucket 20
# TAG: client_db on|off
# If you want to disable collecting per-client statistics, then
# turn off client_db here.
#
#Default:
# client_db on
# TAG: netdb_low
# TAG: netdb_high
# The low and high water marks for the ICMP measurement
# database. These are counts, not percents. The defaults are
# 900 and 1000. When the high water mark is reached, database
# entries will be deleted until the low mark is reached.
#
#Default:
# netdb_low 900
# netdb_high 1000
# TAG: netdb_ping_period
# The minimum period for measuring a site. There will be at
# least this much delay between successive pings to the same
# network. The default is five minutes.
#
#Default:
# netdb_ping_period 5 minutes
# TAG: query_icmp on|off
# If you want to ask your peers to include ICMP data in their ICP
# replies, enable this option.
#
# If your peer has configured Squid (during compilation) with
# '--enable-icmp' then that peer will send ICMP pings to origin server
# sites of the URLs it receives. If you enable this option then the
# ICP replies from that peer will include the ICMP data (if available).
# Then, when choosing a parent cache, Squid will choose the parent with
# the minimal RTT to the origin server. When this happens, the
# hierarchy field of the access.log will be
# "CLOSEST_PARENT_MISS". This option is off by default.
#
#Default:
# query_icmp off
# TAG: test_reachability on|off
# When this is 'on', ICP MISS replies will be ICP_MISS_NOFETCH
# instead of ICP_MISS if the target host is NOT in the ICMP
# database, or has a zero RTT.
#
#Default:
# test_reachability off
# TAG: buffered_logs on|off
# Some log files (cache.log, useragent.log) are written with
# stdio functions, and as such they can be buffered or
# unbuffered. By default they will be unbuffered. Buffering them
# can speed up the writing slightly (though you are unlikely to
# need to worry).
#
#Default:
# buffered_logs off
buffered_logs on
# TAG: reload_into_ims on|off
# When you enable this option, client no-cache or ``reload''
# requests will be changed to If-Modified-Since requests.
# Doing this VIOLATES the HTTP standard. Enabling this
# feature could make you liable for problems which it
# causes.
#
# see also refresh_pattern for a more selective approach.
#
# This option may be disabled by using --disable-http-violations
# with the configure script.
#
#Default:
# reload_into_ims off
# TAG: always_direct
# Usage: always_direct allow|deny [!]aclname ...
#
# Here you can use ACL elements to specify requests which should
# ALWAYS be forwarded directly to origin servers. For example,
# to always directly forward requests for local servers use
# something like:
#
# acl local-servers dstdomain my.domain.net
# always_direct allow local-servers
#
# To always forward FTP requests directly, use
#
# acl FTP proto FTP
# always_direct allow FTP
#
# NOTE: There is a similar, but opposite option named
# 'never_direct'. You need to be aware that "always_direct deny
# foo" is NOT the same thing as "never_direct allow foo". You
# may need to use a deny rule to exclude a more-specific case of
# some other rule. Example:
#
# acl local-external dstdomain external.foo.net
# acl local-servers dstdomain foo.net
# always_direct deny local-external
# always_direct allow local-servers
#
# This option replaces some v1.1 options such as local_domain
# and local_ip.
#
#Default:
# none
# TAG: never_direct
# Usage: never_direct allow|deny [!]aclname ...
#
# never_direct is the opposite of always_direct. Please read
# the description for always_direct if you have not already.
#
# With 'never_direct' you can use ACL elements to specify
# requests which should NEVER be forwarded directly to origin
# servers. For example, to force the use of a proxy for all
# requests, except those in your local domain use something like:
#
# acl local-servers dstdomain foo.net
# acl all src 0.0.0.0/0.0.0.0
# never_direct deny local-servers
# never_direct allow all
#
# or if squid is inside a firewall and there are local intranet
# servers inside the firewall then use something like:
#
# acl local-intranet dstdomain foo.net
# acl local-external dstdomain external.foo.net
# always_direct deny local-external
# always_direct allow local-intranet
# never_direct allow all
#
# This option replaces some v1.1 options such as inside_firewall
# and firewall_ip.
#
#Default:
# none
# TAG: anonymize_headers
# Usage: anonymize_headers allow|deny header_name ...
#
# This option replaces the old 'http_anonymizer' option with
# something that is much more configurable. You may now
# specify exactly which headers are to be allowed, or which
# are to be removed from outgoing requests.
#
# There are two methods of using this option. You may either
# allow specific headers (thus denying all others), or you
# may deny specific headers (thus allowing all others).
#
# For example, to achieve the same behavior as the old
# 'http_anonymizer standard' option, you should use:
#
# anonymize_headers deny From Referer Server
# anonymize_headers deny User-Agent WWW-Authenticate Link
#
# Or, to reproduce the old 'http_anonymizer paranoid' feature
# you should use:
#
# anonymize_headers allow Allow Authorization Cache-Control
# anonymize_headers allow Content-Encoding Content-Length
# anonymize_headers allow Content-Type Date Expires Host
# anonymize_headers allow If-Modified-Since Last-Modified
# anonymize_headers allow Location Pragma Accept
# anonymize_headers allow Accept-Encoding Accept-Language
# anonymize_headers allow Content-Language Mime-Version
# anonymize_headers allow Retry-After Title Connection
# anonymize_headers allow Proxy-Connection
#
# NOTE: You can not mix "allow" and "deny". All 'anonymize_headers'
# lines must have the same second argument.
#
# By default, all headers are allowed (no anonymizing is
# performed).
#
#Default:
# none
# TAG: fake_user_agent
# If you filter the User-Agent header with 'anonymize_headers' it
# may cause some Web servers to refuse your request. Use this to
# fake one up. For example:
#
# fake_user_agent Nutscrape/1.0 (CP/M; 8-bit)
# (credit to Paul Southworth pauls@etext.org for this one!)
#
#Default:
# none
# TAG: icon_directory
# Where the icons are stored. These are normally kept in
# /usr/lib/squid/icons
#
#Default:
# icon_directory /usr/lib/squid/icons
# TAG: error_directory
# Directory where the error files are read from.
# /usr/lib/squid/errors contains sets of error files
# in different languages. The default error directory
# is /etc/squid/errors, which is a link to one of these
# error sets.
#
# If you wish to create your own versions of the error files,
# either to customize them to suit your language or company,
# copy the template English files to another
# directory and point this tag at them.
#
#error_directory /etc/squid/errors
#
#Default:
# error_directory /etc/squid/errors
# TAG: minimum_retry_timeout (seconds)
# This specifies the minimum connect timeout, for when the
# connect timeout is reduced to compensate for the availability
# of multiple IP addresses.
#
# When a connection to a host is initiated, and that host has
# several IP addresses, the default connection timeout is reduced
# by dividing it by the number of addresses. So, a site with 15
# addresses would then have a timeout of 8 seconds for each
# address attempted. To avoid having the timeout reduced to the
# point where even a working host would not have a chance to
# respond, this setting is provided. The default, and the
# minimum value, is five seconds, and the maximum value is sixty
# seconds, or half of connect_timeout, whichever is greater and
# less than connect_timeout.
#
#Default:
# minimum_retry_timeout 5 seconds
# TAG: maximum_single_addr_tries
# This sets the maximum number of connection attempts for a
# host that only has one address (for multiple-address hosts,
# each address is tried once).
#
# The default value is three tries, the (not recommended)
# maximum is 255 tries. A warning message will be generated
# if it is set to a value greater than ten.
#
#Default:
# maximum_single_addr_tries 3
# TAG: snmp_port
# Squid can now serve statistics and status information via SNMP.
# A value of "0" disables SNMP support. If you wish to use SNMP,
# set this to "3401" to use the normal SNMP port.
#
# NOTE: SNMP support requires use of the --enable-snmp configure
# command line option.
#
#Default:
# snmp_port 0
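#
#Example (a sketch; requires a build with --enable-snmp, and the
#"snmppublic" ACL must be defined as shown):
# snmp_port 3401
# acl snmppublic snmp_community public
# snmp_access allow snmppublic localhost
# snmp_access deny all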
# TAG: snmp_access
# Allowing or denying access to the SNMP port.
#
# All access to the agent is denied by default.
# usage:
#
# snmp_access allow|deny [!]aclname ...
#
#Example:
# snmp_access allow snmppublic localhost
# snmp_access deny all
#
#Default:
# snmp_access deny all
# TAG: snmp_incoming_address
# TAG: snmp_outgoing_address
# Just like 'udp_incoming_address' above, but for the SNMP port.
#
# snmp_incoming_address is used for the SNMP socket receiving
# messages from SNMP agents.
# snmp_outgoing_address is used for SNMP packets returned to SNMP
# agents.
#
# The default snmp_incoming_address (0.0.0.0) is to listen on all
# available network interfaces.
#
# If snmp_outgoing_address is set to 255.255.255.255 (the default)
# then it will use the same socket as snmp_incoming_address. Only
# change this if you want to have SNMP replies sent using another
# address than where this Squid listens for SNMP queries.
#
# NOTE, snmp_incoming_address and snmp_outgoing_address can not have
# the same value since they both use port 3401.
#
#Default:
# snmp_incoming_address 0.0.0.0
# snmp_outgoing_address 255.255.255.255
# TAG: as_whois_server
# WHOIS server to query for AS numbers. NOTE: AS numbers are
# queried only when Squid starts up, not for every request.
#
#Default:
# as_whois_server whois.ra.net
# TAG: wccp_router
# Use this option to define your WCCP ``home'' router for
# Squid. Setting the 'wccp_router' to 0.0.0.0 (the default)
# disables WCCP.
#
#Default:
# wccp_router 0.0.0.0
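#
#Example (the router address is hypothetical):
# wccp_router 192.168.1.1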
# TAG: wccp_version
# According to some users, Cisco IOS 11.2 only supports WCCP
# version 3. If you're using that version of IOS, change
# this value to 3.
#
#Default:
# wccp_version 4
# TAG: wccp_incoming_address
# TAG: wccp_outgoing_address
# wccp_incoming_address Use this option if you require WCCP
# messages to be received on only one
# interface. Do NOT use this option if
# you're unsure how many interfaces you
# have, or if you know you have only one
# interface.
#
# wccp_outgoing_address Use this option if you require WCCP
# messages to be sent out on only one
# interface. Do NOT use this option if
# you're unsure how many interfaces you
# have, or if you know you have only one
# interface.
#
# The default behavior is to not bind to any specific address.
#
# NOTE, wccp_incoming_address and wccp_outgoing_address can not have
# the same value since they both use port 2048.
#
#Default:
# wccp_incoming_address 0.0.0.0
# wccp_outgoing_address 255.255.255.255
# DELAY POOL PARAMETERS (all require DELAY_POOLS compilation option)
# -----------------------------------------------------------------------------
# TAG: delay_pools
# This represents the number of delay pools to be used. For example,
# if you have one class 2 delay pool and one class 3 delay pool, you
# have a total of 2 delay pools.
#
# To enable this option, you must use --enable-delay-pools with the
# configure script.
#
#Default:
# delay_pools 0
# TAG: delay_class
# This defines the class of each delay pool. There must be exactly one
# delay_class line for each delay pool. For example, to define two
# delay pools, one of class 2 and one of class 3, the settings above
# and here would be:
#
#Example:
# delay_pools 2 # 2 delay pools
# delay_class 1 2 # pool 1 is a class 2 pool
# delay_class 2 3 # pool 2 is a class 3 pool
#
# The delay pool classes are:
#
# class 1 Everything is limited by a single aggregate
# bucket.
#
# class 2 Everything is limited by a single aggregate
# bucket as well as an "individual" bucket chosen
# from bits 25 through 32 of the IP address.
#
# class 3 Everything is limited by a single aggregate
# bucket as well as a "network" bucket chosen
# from bits 17 through 24 of the IP address and an
# "individual" bucket chosen from bits 17 through
# 32 of the IP address.
#
# NOTE: If an IP address is a.b.c.d
# -> bits 25 through 32 are "d"
# -> bits 17 through 24 are "c"
# -> bits 17 through 32 are "c * 256 + d"
#
#Default:
# none
# TAG: delay_access
# This is used to determine which delay pool a request falls into.
# The first matched delay pool is always used, i.e., if a request falls
# into delay pool number one, no more delay pools are checked, otherwise the
# rest are checked in order of their delay pool number until they have
# all been checked. For example, if you want some_big_clients in delay
# pool 1 and lotsa_little_clients in delay pool 2:
#
#Example:
# delay_access 1 allow some_big_clients
# delay_access 1 deny all
# delay_access 2 allow lotsa_little_clients
# delay_access 2 deny all
#
#Default:
# none
# TAG: delay_parameters
# This defines the parameters for a delay pool. Each delay pool has
# a number of "buckets" associated with it, as explained in the
# description of delay_class. For a class 1 delay pool, the syntax is:
#
#delay_parameters pool aggregate
#
# For a class 2 delay pool:
#
#delay_parameters pool aggregate individual
#
# For a class 3 delay pool:
#
#delay_parameters pool aggregate network individual
#
# The variables here are:
#
# pool a pool number - ie, a number between 1 and the
# number specified in delay_pools as used in
# delay_class lines.
#
# aggregate the "delay parameters" for the aggregate bucket
# (class 1, 2, 3).
#
# individual the "delay parameters" for the individual
# buckets (class 2, 3).
#
# network the "delay parameters" for the network buckets
# (class 3).
#
# A pair of delay parameters is written restore/maximum, where restore is
# the number of bytes (not bits - modem and network speeds are usually
# quoted in bits) per second placed into the bucket, and maximum is the
# maximum number of bytes which can be in the bucket at any time.
#
# For example, if delay pool number 1 is a class 2 delay pool as in the
# above example, and is being used to strictly limit each host to 64kbps
# (plus overheads), with no overall limit, the line is:
#
#delay_parameters 1 -1/-1 8000/8000
#
# Note that the figure -1 is used to represent "unlimited".
#
# And, if delay pool number 2 is a class 3 delay pool as in the above
# example, and you want to limit it to a total of 256kbps (strict limit)
# with each 8-bit network permitted 64kbps (strict limit) and each
# individual host permitted 4800bps with a bucket maximum size of 64kb
# to permit a decent web page to be downloaded at a decent speed
# (if the network is not being limited due to overuse) but slow down
# large downloads more significantly:
#
#delay_parameters 2 32000/32000 8000/8000 600/64000
#
# There must be one delay_parameters line for each delay pool.
#
#Default:
# none
# TAG: delay_initial_bucket_level (percent, 0-100)
# The initial bucket percentage is used to determine how much is put
# in each bucket when squid starts, is reconfigured, or first notices
# a host accessing it (in class 2 and class 3, individual hosts and
# networks only have buckets associated with them once they have been
# "seen" by squid).
#
#Default:
# delay_initial_bucket_level 50
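#
# Putting the delay-pool tags together, a minimal class 2 setup that
# caps each client at roughly 64kbit/s (the ACL name and network below
# are hypothetical) would be:
#
# delay_pools 1
# delay_class 1 2
# acl clients src 192.168.1.0/255.255.255.0
# delay_access 1 allow clients
# delay_access 1 deny all
# delay_parameters 1 -1/-1 8000/8000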
# TAG: incoming_icp_average
# TAG: incoming_http_average
# TAG: incoming_dns_average
# TAG: min_icp_poll_cnt
# TAG: min_dns_poll_cnt
# TAG: min_http_poll_cnt
# Heavy voodoo here. I can't even believe you are reading this.
# Are you crazy? Don't even think about adjusting these unless
# you understand the algorithms in comm_select.c first!
#
#Default:
# incoming_icp_average 6
# incoming_http_average 4
# incoming_dns_average 4
# min_icp_poll_cnt 8
# min_dns_poll_cnt 8
# min_http_poll_cnt 8
# TAG: max_open_disk_fds
# To avoid having disk as the I/O bottleneck Squid can optionally
# bypass the on-disk cache if more than this many disk file
# descriptors are open.
#
# A value of 0 indicates no limit.
#
#Default:
# max_open_disk_fds 0
# TAG: offline_mode
# Enable this option and Squid will never try to validate cached
# objects.
#
#Default:
# offline_mode off
# TAG: uri_whitespace
# What to do with requests that have whitespace characters in the
# URI. Options:
#
# strip: The whitespace characters are stripped out of the URL.
# This is the behavior recommended by RFC2616.
# deny: The request is denied. The user receives an "Invalid
# Request" message.
# allow: The request is allowed and the URI is not changed. The
# whitespace characters remain in the URI. Note the
# whitespace is passed to redirector processes if they
# are in use.
# encode: The request is allowed and the whitespace characters are
# encoded according to RFC1738. This could be considered
# a violation of the HTTP/1.1
# RFC because proxies are not allowed to rewrite URI's.
# chop: The request is allowed and the URI is chopped at the
# first whitespace. This might also be considered a
# violation.
#
#Default:
# uri_whitespace strip
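#
# For example, to reject such requests outright instead of
# stripping the whitespace (an illustrative choice, not a
# recommendation):
#
#uri_whitespace deny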
# TAG: broken_posts
# A list of ACL elements which, if matched, causes Squid to send
# an extra CRLF pair after the body of a PUT/POST request.
#
# Some HTTP servers have broken implementations of PUT/POST,
# and rely on an extra CRLF pair sent by some WWW clients.
#
# Quote from RFC 2068 section 4.1 on this matter:
#
# Note: certain buggy HTTP/1.0 client implementations generate
# extra CRLF's after a POST request. To restate what is explicitly
# forbidden by the BNF, an HTTP/1.1 client must not preface or follow
# a request with an extra CRLF.
#
#Example:
# acl buggy_server url_regex ^http://....
# broken_posts allow buggy_server
#
#Default:
# none
# TAG: mcast_miss_addr
# Note: This option is only available if Squid is rebuilt with the
# -DMULTICAST_MISS_STREAM option
#
# If you enable this option, every "cache miss" URL will
# be sent out on the specified multicast address.
#
# Do not enable this option unless you are absolutely
# certain you understand what you are doing.
#
#Default:
# mcast_miss_addr 255.255.255.255
# TAG: mcast_miss_ttl
# Note: This option is only available if Squid is rebuilt with the
# -DMULTICAST_MISS_TTL option
#
# This is the time-to-live value for packets multicasted
# when multicasting off cache miss URLs is enabled. By
# default this is set to 'site scope', i.e. 16.
#
#Default:
# mcast_miss_ttl 16
# TAG: mcast_miss_port
# Note: This option is only available if Squid is rebuilt with the
# -DMULTICAST_MISS_STREAM option
#
# This is the port number to be used in conjunction with
# 'mcast_miss_addr'.
#
#Default:
# mcast_miss_port 3135
# TAG: mcast_miss_encode_key
# Note: This option is only available if Squid is rebuilt with the
# -DMULTICAST_MISS_STREAM option
#
# The URLs that are sent in the multicast miss stream are
# encrypted. This is the encryption key.
#
#Default:
# mcast_miss_encode_key XXXXXXXXXXXXXXXX
# TAG: nonhierarchical_direct
# By default, Squid will send any non-hierarchical requests
# (matching hierarchy_stoplist or a non-cachable request type)
# directly to origin servers.
#
# If you set this to off, then Squid will prefer to send these
# requests to parents.
#
# Note that in most configurations, turning this off will only
# add latency to these requests without any improvement in global
# hit ratio.
#
# If you are inside a firewall, see never_direct instead of
# this directive.
#
#Default:
# nonhierarchical_direct on
# TAG: prefer_direct
# Normally Squid tries to use parents for most requests. If for some
# reason you would like it to first try going direct, and only use a
# parent if going direct fails, set this to off.
#
# By combining nonhierarchical_direct off and prefer_direct on you
# can set up Squid to use a parent as a backup path if going direct
# fails.
#
#Default:
# prefer_direct off
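#
# For example, the backup-path setup described above might look
# like this (the parent hostname and ports are placeholders):
#
#cache_peer parent.example.com parent 3128 3130
#nonhierarchical_direct off
#prefer_direct on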
# TAG: strip_query_terms
# By default, Squid strips query terms from requested URLs before
# logging. This protects your users' privacy.
#
#Default:
# strip_query_terms on
# TAG: coredump_dir
# By default Squid leaves core files in the first cache_dir
# directory. If you set 'coredump_dir' to a directory
# that exists, Squid will chdir() to that directory at startup
# and coredump files will be left there.
#
#Default:
# none
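#
# For example, to collect core files in a dedicated directory
# (the path is a placeholder; the directory must already exist):
#
#coredump_dir /var/spool/squid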
# TAG: redirector_bypass
# When this is 'on', a request will not go through the
# redirector if all redirectors are busy. If this is 'off'
# and the redirector queue grows too large, Squid will exit
# with a FATAL error and ask you to increase the number of
# redirectors. You should only enable this if the redirectors
# are not critical to your caching system. If you use
# redirectors for access control, and you enable this option,
# then users may have access to pages that they should not
# be allowed to request.
#
#Default:
# redirector_bypass off
# TAG: ignore_unknown_nameservers
# By default Squid checks that DNS responses are received
# from the same IP addresses that they are sent to. If they
# don't match, Squid ignores the response and writes a warning
# message to cache.log. You can allow responses from unknown
# nameservers by setting this option to 'off'.
#
#Default:
# ignore_unknown_nameservers on
# TAG: digest_generation
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This controls whether the server will generate a Cache Digest
# of its contents. By default, Cache Digest generation is
# enabled if Squid is compiled with USE_CACHE_DIGESTS defined.
#
#Default:
# digest_generation on
# TAG: digest_bits_per_entry
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the number of bits of the server's Cache Digest which
# will be associated with the Digest entry for a given HTTP
# Method and URL (public key) combination. The default is 5.
#
#Default:
# digest_bits_per_entry 5
# TAG: digest_rebuild_period (seconds)
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the number of seconds between Cache Digest rebuilds.
#
#Default:
# digest_rebuild_period 1 hour
# TAG: digest_rewrite_period (seconds)
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the number of seconds between Cache Digest writes to
# disk.
#
#Default:
# digest_rewrite_period 1 hour
# TAG: digest_swapout_chunk_size (bytes)
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the number of bytes of the Cache Digest to write to
# disk at a time. It defaults to 4096 bytes (4KB), the Squid
# default swap page.
#
#Default:
# digest_swapout_chunk_size 4096 bytes
# TAG: digest_rebuild_chunk_percentage (percent, 0-100)
# Note: This option is only available if Squid is rebuilt with the
# --enable-cache-digests option
#
# This is the percentage of the Cache Digest to be scanned at a
# time. By default it is set to 10% of the Cache Digest.
#
#Default:
# digest_rebuild_chunk_percentage 10
# TAG: chroot
# Use this to have Squid do a chroot() while initializing. This
# also causes Squid to fully drop root privileges after
# initializing. This means, for example, that if you use an HTTP
# port less than 1024 and try to reconfigure, you will get an
# error.
#
#Default:
# none
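#
# For example (the directory is a placeholder and must contain
# everything Squid needs at runtime, such as its cache and log
# directories):
#
#chroot /var/squid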
# TAG: client_persistent_connections
# TAG: server_persistent_connections
# Persistent connection support for clients and servers. By
# default, Squid uses persistent connections (when allowed)
# with its clients and servers. You can use these options to
# disable persistent connections with clients and/or servers.
#
#Default:
# client_persistent_connections on
# server_persistent_connections on
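#
# For example, to keep persistent connections with clients but
# disable them toward origin servers (an illustration only):
#
#client_persistent_connections on
#server_persistent_connections off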
# TAG: pipeline_prefetch
# To boost the performance of pipelined requests to more closely
# match that of a non-proxied environment, Squid tries to fetch
# up to two requests in parallel from a pipeline.
#
#Default:
# pipeline_prefetch on
# TAG: extension_methods
# Squid only knows about standardized HTTP request methods.
# You can add up to 20 additional "extension" methods here.
#
#Default:
# none
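#
# For example, to let WebDAV-style extension methods pass through
# (the method names here are illustrative):
#
#extension_methods REPORT MERGE MKACTIVITY CHECKOUT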
# TAG: high_response_time_warning (msec)
# If the one-minute median response time exceeds this value,
# Squid prints a WARNING with debug level 0 to get the
# administrator's attention. The value is in milliseconds.
#
#Default:
# high_response_time_warning 0
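#
# For example, to be warned when the one-minute median response
# time exceeds two seconds (the threshold is an arbitrary example):
#
#high_response_time_warning 2000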
# TAG: high_page_fault_warning
# If the one-minute average page fault rate exceeds this
# value, Squid prints a WARNING with debug level 0 to get
# the administrator's attention. The value is in page faults
# per second.
#
#Default:
# high_page_fault_warning 0
# TAG: high_memory_warning
# If the memory usage (as determined by mallinfo) exceeds this
# value, Squid prints a WARNING with debug level 0 to get
# the administrator's attention.
#
#Default:
# high_memory_warning 0
# TAG: store_dir_select_algorithm
# Set this to 'round-robin' as an alternative.
#
#Default:
# store_dir_select_algorithm least-load
# TAG: forward_log
# Note: This option is only available if Squid is rebuilt with the
# -DWIP_FWD_LOG option
#
# Logs the server-side requests.
#
# This is currently work in progress.
#
#Default:
# none
# TAG: ie_refresh on|off
# Microsoft Internet Explorer up until version 5.5 Service
# Pack 1 has an issue with transparent proxies, wherein it
# is impossible to force a refresh. Turning this on provides
# a partial fix to the problem, by causing all IMS-REFRESH
# requests from older IE versions to check the origin server
# for fresh content. This reduces hit ratio by some amount
# (~10% in my experience), but allows users to actually get
# fresh content when they want it. Note that because Squid
# cannot tell if the user is using 5.5 or 5.5SP1, the behavior
# of 5.5 is unchanged from old versions of Squid (i.e. a
# forced refresh is impossible). Newer versions of IE will,
# hopefully, continue to have the new behavior and will be
# handled based on that assumption. This option defaults to
# the old Squid behavior, which is better for hit ratios but
# worse for clients using IE, if they need to be able to
# force fresh content.
#
#Default:
# ie_refresh off
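#
# For example, to enable the workaround for older IE clients
# behind a transparent proxy, at the cost of some hit ratio:
#
#ie_refresh on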